Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.
The growing ubiquity of algorithms in society: implications, impacts and innovations
Scientific Discussion meeting organised by Professor Sofia Olhede, Professor Patrick Wolfe, Professor Tony McEnery and Professor Neil Lawrence.
The use of algorithms and analytics in society is growing rapidly: from machine learning recommender systems in commerce, to credit scoring methods operating outside standard regulatory practice, to self-driving cars. The rapid adoption of new technology has the potential to greatly improve citizens’ experiences, but also poses a number of new challenges. This meeting will highlight opportunities and challenges in this rapidly changing landscape, bringing legal and ethics experts together with technologists to discuss implications, impacts and innovations.
Enquiries: contact the Scientific Programmes Team
Organisers
Schedule
Chair
Professor Tony McEnery, Economic and Social Research Council, UK
Professor Tony McEnery has been appointed as the Economic and Social Research Council’s (ESRC) new Research Director. Professor McEnery is Distinguished Professor of English Language and Linguistics at Lancaster University and Director of the ESRC Centre for Corpus Approaches to Social Science (CASS). His work at CASS has been focused on bringing linguistic analysis to bear on a number of interdisciplinary contexts, focussing on topics as diverse as climate change, Islamophobia, medical communication and poverty. In February, Professor McEnery received the Queen’s Anniversary Prize at Buckingham Palace on behalf of CASS, for its work in 'computer analysis of world languages in print, speech, and online'. Professor McEnery has worked with scholars from a broad range of subjects, including accountancy, criminology, international relations, religious studies and sociology. He has also worked with an array of impact partners including British Telecom, the Department of Culture, Media and Sport, the Environment Agency, the Home Office, IBM and Research in Motion. Professor McEnery was also Dean of Arts and Social Sciences at Lancaster, and before that Director of Research at the Arts and Humanities Research Council. He joined ESRC as its Director of Research, on secondment from Lancaster University, in October 2016.
09:05 - 09:30 |
Transparency and Accountability
Christina Blacklaws, The Law Society of England and Wales, UK
Christina studied Jurisprudence at Oxford and qualified as a solicitor in 1991. She has developed and managed law firms, including a virtual law firm. In 2011 she set up the Co-operative Legal Services family law offering, later becoming their Director of Policy. She is currently Director of Innovation at top-100 firm Cripps LLP. Christina holds a range of public appointments, including member of the Family Justice Council, trustee of LawWorks and council member for the Women Lawyers Division. Christina is Vice President of the Law Society of England and Wales and will become President in 2018. She is an award-winning published author, speaker and lecturer, and a frequent media commentator.
09:30 - 09:45 | Discussion | |
09:45 - 10:15 |
Algorithmic risk assessment policing models
This talk uses Durham Constabulary’s Harm Assessment Risk Tool (HART) as a case study. HART is one of the first algorithmic models to be deployed by a UK police force in an operational capacity. The potential benefits of such tools will be discussed, the concept and method of HART considered, and the results of the model’s first validation reviewed. The talk will critique the use of algorithmic tools within policing from a societal and legal perspective, focusing in particular upon substantive common law grounds for judicial review. Two linked proposals will be made: a concept of ‘experimental’ proportionality, and a decision-making guidance framework called ‘ALGO-CARE’, which together could create a model that recognises the need for controlled algorithmic experimentation in the public sector while acknowledging and carefully managing any risks to individual rights.
Dr Marion Oswald
Marion Oswald is a Senior Fellow in Law, Solicitor (non-practising) and Head of the Centre for Information Rights at the University of Winchester. Following a career as an in-house lawyer with international technology companies and the UK Government, she joined the University of Winchester in 2009 establishing the Centre for Information Rights in 2012. Her research focuses on information technology, privacy and information law. Recent work includes a collaboration with Durham Constabulary to reflect upon the recent operational deployment of an algorithmic risk assessment tool within the force, and a consultation report relating to the depiction of young children on digital, online and broadcast media. She is an executive member of the British and Irish Law Education and Technology Association, and sits on the National Statistician's Data Ethics Advisory Committee.
10:15 - 10:30 | Discussion | |
10:30 - 11:00 | Coffee Break | |
11:00 - 11:30 |
Algorithms, ethics and data protection: a regulator's view
Abstract to be confirmed
Carl Wiper, Information Commissioner’s Office, UK
Carl Wiper has worked at the Information Commissioner’s Office since 2010. He is a Group Manager in the Policy and Engagement department at the ICO. He is currently responsible for producing ICO guidance on the GDPR, with a particular focus on profiling, transparency and accountability. He worked on the ICO’s award-winning paper on big data, AI, machine learning and data protection. He has also worked on European-level guidance on profiling and automated decision-making within the EU’s Article 29 Working Party. Before joining the ICO, he was an information manager in local government, and his career has been spent working in information management and research in organisations in the public, private and third sectors.
11:30 - 11:45 | Discussion | |
11:45 - 12:15 |
Algorithmic regulation and the Rule of Law
This talk will first explore how we distinguish between law and regulation, explaining that regulation must be situated within the contours shaped by the law and the Rule of Law. It will then discuss a specific type of computational law, based on data-driven legal technologies. The ensuing artificial legal intelligence enables quantified legal prediction and argumentation mining, both based on machine learning applications (so-called natural language processing). This raises the question of whether the implementation of such technologies should count as law or as regulation, and what this means for their further development. The talk will propose the concept of ‘agonistic machine learning’ as a means to bring data-driven regulation under the Rule of Law. This entails obligating developers and users of these technologies to re-introduce adversarial interrogation at the level of the computational architecture.
Professor Mireille Hildebrandt, Vrije Universiteit Brussel
Mireille Hildebrandt is a lawyer and a philosopher. She is a tenured Research Professor at the Faculty of Law & Criminology of Vrije Universiteit Brussel, on ‘Interfacing Law and Technology’. She also holds a part-time Chair at the Science Faculty of Radboud University Nijmegen, on ‘Smart Environments, Data Protection and the Rule of Law’. Hildebrandt conducts research on the cusp of law, philosophy and technology, more specifically on the implications of artificial intelligence and algorithmic decision-making. She publishes widely; her latest books are Smart Technologies and the End(s) of Law (Edward Elgar 2015) and Information, Freedom and Property (Routledge 2016).
12:30 - 13:30 | Lunch |
Chair
Professor Patrick J Wolfe
Professor Patrick J Wolfe
Patrick J. Wolfe is a professor of statistics and computer science and EPSRC Established Career Fellow in the Mathematical Sciences at University College London. He joined the faculty of University College London in 2012 after teaching at Cambridge and then Harvard, and is the founding director of UCL’s Big Data Institute. Professor Wolfe is also a trustee and non-executive director of the Alan Turing Institute, the United Kingdom’s new national institute for data science, where he has played a leading role in establishing the institute and shaping its priorities through an extensive programme of engagement with a diverse range of experts and stakeholders. A past recipient of the Presidential Early Career Award for Scientists and Engineers from the White House while at Harvard, he has provided expert advice on applications of data science to policy, societal and commercial challenges, including to the US and UK governments and to a range of public and private bodies. Professor Wolfe has recently been appointed Dean of the College of Science at Purdue University.
13:30 - 14:00 | Cat Drew | |
14:00 - 14:15 | Discussion | |
14:15 - 14:45 |
How should we think about algorithmic accountability?
This talk will suggest that data and AI innovation requires a public licence to operate. Hetan will consider how notions of data ethics change as technology changes. He will argue that making algorithms ‘accountable’ will be a key issue in retaining trust and trustworthiness, and will review different options for achieving this, including transparency, governance and the monitoring of outcomes. He will also suggest that there is a need to work at a higher level, including the creation of professional standards and codes of ethics and conduct for data scientists. Finally, he will discuss the wider regulatory challenges posed in this area and consider what policymakers and regulators should be doing.
Hetan Shah, Royal Statistical Society, UK
Hetan Shah is Executive Director of the Royal Statistical Society, an 8,000-member body with a vision of a world with data at the heart of understanding and decision-making. He is Chair of the Friends Provident Foundation, a grant-making trust, and visiting senior research fellow at the Policy Institute, King’s College London. He is a member of the IPPR Commission on Economic Justice and of the UK Social Metrics Commission.
14:45 - 15:00 | Discussion | |
15:00 - 15:30 | Tea Break | |
15:30 - 16:00 |
Algorithms and multi-disciplinary research
Rebecca Endean OBE, UK Research and Innovation, UK
Abstract to be confirmed
16:00 - 16:15 | Discussion | |
16:15 - 16:45 |
Transparency and Trust – legal liability for algorithmic decisions
Algorithmic decisions can give rise to legal liability, both for causing direct losses (such as in motor vehicle accidents) and for infringing fundamental rights. In either case, the law looks for an explanation of how and why the algorithm made its decision, i.e. for transparency of the decision-making process. But there is an important difference between ex ante and ex post transparency. The more complex the algorithm, particularly where it derives from machine learning, the more difficult it becomes to provide ex ante transparency. And there is a strong argument that by demanding ex ante transparency the law might limit the improvement of algorithmic decision-making. This talk explains the principles which should apply in deciding whether ex ante or ex post transparency is sufficient, or indeed whether a complete inability to provide explanations might be permissible. It also attempts to identify how lawmakers should decide between incentivising transparency via liability laws and mandating transparency through regulation.
Professor Chris Reed
Chris Reed is Professor of Electronic Commerce Law at the Centre for Commercial Law Studies, Queen Mary University of London, where he was formerly Director of the Centre and subsequently Academic Dean of the Faculty of Law & Social Science. He consults to companies and law firms, having previously been of counsel to the City of London law firms Lawrence Graham, Tite & Lewis and Stephenson Harwood. Chris has worked exclusively in the computing and technology law field since 1987, and teaches University of London LLM students from all over the world. He has published widely on many aspects of computer law. His latest book is Making Laws for Cyberspace (OUP 2012); he is the editor and part author of Computer Law (7th ed, Oxford University Press 2011), the author of Internet Law (2nd ed, Cambridge University Press 2004), Digital Information Law: electronic documents and requirements of form (Centre for Commercial Law Studies 1996) and Electronic Finance Law (Woodhead Faulkner 1991), and the co-editor of Cross-Border Electronic Banking (2nd ed, Lloyd’s of London Press 2000). Research with which he was involved led to the EU directives on electronic signatures and on electronic commerce. The Leverhulme Foundation awarded him a Major Research Fellowship for 2009-2011 (see Making Laws for Cyberspace for findings). From 1997 to 2000 Chris was Joint Chairman of the Society for Computers and Law, of which he is an inaugural Honorary Fellow, and in 1997-8 he acted as Specialist Adviser to the House of Lords Select Committee on Science and Technology. Chris has acted as an Expert for the European Commission, represented the UK Government at the Hague Conference on Private International Law and has been an invited speaker at OECD and G8 international conferences.
16:45 - 17:00 | Discussion |
09:00 - 09:30 |
Machine Learning and the Humanitarian Information Gap
Mounting an effective response to a humanitarian crisis depends on high-quality and timely information. However, the very nature of such crises makes it a challenge to collect reliable data, particularly on the time scale of days or hours when it is most needed. Given the unprecedented quantities of data now being generated worldwide (e.g. by sensors, satellites, mobile devices and the usage of digital services), as well as recent advances in the algorithms which can make sense of this raw data, there is significant potential to improve the initial assessment and ongoing monitoring of emergencies. This talk will discuss some of the opportunities and limitations, using examples of work conducted during various natural and man-made emergencies.
Dr John Quinn, United Nations Global Pulse, UK
John Quinn is a Data Scientist at UN Global Pulse, dealing primarily with analytics projects in Africa, where he has been technical lead on a number of large-scale initiatives. From 2007 to 2015 he was a faculty member of the Department of Computer Science at Makerere University, Uganda. His research interests are in artificial intelligence and data science, and the application of these to practical problems in the developing world. He received a BA in Computer Science from the University of Cambridge in 2000, and a PhD in machine learning from the University of Edinburgh in 2007.
09:45 - 10:15 |
Differential privacy and how it compares with legal standards of privacy
Differential privacy is a robust concept of privacy which brings mathematical rigour to the decades-old problem of privacy-preserving analysis of collections of sensitive personal information. Informally, differential privacy requires that the outcome of an analysis remain stable under any possible change to an individual’s information, and hence protects individuals from attackers who try to learn the information particular to them. The subject of much theoretical investigation, differential privacy has recently been making significant strides towards implementation and use. This talk will present differential privacy and discuss how one can reason about how it matches concepts of privacy appearing in privacy law and regulations. Based on the work of a working group: K Nissim, A Bembenek, A Wood, M Bun, M Gaboardi, U Gasser, D O’Brien, T Steinke and S Vadhan.
Professor Kobbi Nissim
Professor Kobbi Nissim is McDevitt Chair in Computer Science, Georgetown University. Nissim’s work is focused on the mathematical formulation and understanding of privacy. His work from 2003 and 2004 with Dinur and Dwork initiated rigorous foundational research of privacy and presented a precursor of differential privacy, a definition of privacy in computation that he introduced in 2006 with Dwork, McSherry and Smith. His research studies privacy in various contexts, including statistics, computational learning, mechanism design, social networks and, more recently, law and policy. Since 2011, Nissim has been involved with the Privacy Tools for Sharing Research Data project at Harvard, developing privacy-preserving tools for the sharing of social-science data. Nissim was awarded the Gödel Prize in 2017, the IACR TCC Test of Time Award in 2016, and the ACM PODS Alberto O Mendelzon Test-of-Time Award in 2013.
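The stability requirement described in the abstract is often made concrete with the Laplace mechanism, a canonical construction from the differential privacy literature. The sketch below is illustrative only and is not taken from the talk: a counting query has sensitivity 1 (adding or removing one individual changes the true count by at most 1), so adding Laplace noise with scale 1/ε yields ε-differential privacy.

```python
import math
import random


def laplace_noise(scale):
    """Draw one sample from a zero-mean Laplace distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def dp_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices: the noisy output distribution is
    nearly unchanged by any one individual's presence or absence.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller ε means stronger privacy but noisier answers; the released value no longer pins down whether any particular individual satisfied the predicate.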
10:15 - 10:30 | Discussion | |
10:30 - 11:00 | Coffee Break | |
11:00 - 11:30 |
Data science for the public sector
Public sector organisations are increasingly interested in using data science capabilities to deliver policy and generate efficiencies in high-uncertainty environments. The long-term success of data science in the public sector relies on successfully embedding it into delivery solutions for policy implementation. This requires organisational innovation and change, delivered through structural and cultural adaptation together with capacity building. Another key factor for success is the contribution of academia and the private and third sectors. This talk will discuss the opportunities that exist for using data science in delivering public services at the international and national levels.
Professor Slava Mikhaylov
Slava Mikhaylov is a Professor of Public Policy and Data Science at the University of Essex, holding a joint appointment in the Department of Government and the Computer Science Department’s Institute for Analytics and Data Science. He is Chief Scientific Adviser to Essex County Council and a co-investigator in the UK Economic and Social Research Council Big Data infrastructure investment initiative, the Consumer Data Research Centre at University College London. His research and teaching are primarily in the field of machine learning and natural language processing.
11:30 - 11:45 | Discussion | |
11:45 - 12:15 |
The automation of political communication on Twitter: the case of the Brexit botnet
Dr Dan Mercea, City, University of London, UK
This presentation reports on a network of Twitterbots (automatic posting protocols) comprising 13,493 accounts that tweeted about the UK’s EU membership referendum, only to disappear from Twitter shortly after the ballot. We compared active users to this set of political bots with respect to temporal tweeting behaviour, the size and speed of retweet cascades, and the composition of their retweet cascades (user-to-bot vs bot-to-bot) to evidence strategies for bot deployment. Our results advance the analysis of political bots by showing that Twitterbots can be effective at rapidly generating small to medium-sized cascades; that the retweeted content comprises user-generated hyperpartisan news, which is not strictly fake news but whose shelf life is remarkably short; and, finally, that a botnet may be organised in specialised tiers or clusters dedicated to replicating either active users or content generated by other bots.
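The user-to-bot vs bot-to-bot cascade composition mentioned above amounts to labelling each retweet edge by whether its two endpoints are flagged as bots. A hypothetical sketch of that bookkeeping (the function name, data layout and account identifiers are invented for illustration; the study’s actual pipeline is not described here):

```python
from collections import Counter


def cascade_composition(retweets, bot_accounts):
    """Tally retweet edges by the bot/user status of their endpoints.

    retweets: iterable of (original_author, retweeter) pairs
    bot_accounts: set of account identifiers flagged as bots
    Returns a Counter over labels such as 'bot-to-bot' and 'user-to-bot'.
    """
    def kind(account):
        return "bot" if account in bot_accounts else "user"

    return Counter(f"{kind(src)}-to-{kind(dst)}" for src, dst in retweets)
```

Given a bot list and the retweet edges of a cascade, the resulting tallies indicate whether a cascade was amplified mainly by bots replicating users or by bots replicating other bots.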
12:30 - 13:30 | Lunch |
Chair
Professor Sofia Olhede, University College London
Sofia Olhede has been Professor of Statistics at University College London (UCL) since 2007, and was made an honorary professor of computer science there a year later. She was awarded her PhD in 2003 at Imperial College London, where she was a Lecturer (assistant professor) and Senior Lecturer (associate professor) between 2002 and 2006. She is Director of UCL’s Centre for Data Science and, until last year, was chair of the Alan Turing Institute’s Science Committee. Sofia served on the UK Royal Society’s Machine Learning Committee and the British Academy and Royal Society Data Governance Project, and is a member of the Personal Data and Individual Access Control section of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She currently holds a European Research Council Consolidator Fellowship, and previously held a five-year UK Engineering and Physical Sciences Research Council Leadership Fellowship.
13:30 - 14:00 |
Machine learning and genomics: precision medicine vs patient privacy
Machine learning has the potential for major societal impact in computational biology applications. In particular, it plays a central role in the development of precision medicine, whereby treatment is tailored to the clinical or genetic specificities of patients. However, these advances require collecting and sharing among researchers large amounts of genomic data, which generates much concern about privacy. This talk will review recent trends in both compromising and protecting patient privacy.
Dr Chloe-Agathe Azencott, Mines Paris Tech, France
Chloe-Agathe Azencott received her PhD in computer science, developing machine learning methods for drug discovery, at the University of California, Irvine (USA) in 2010. She then spent three years as a research scientist at the Max Planck Institutes for Developmental Biology and Intelligent Systems in Tübingen (Germany). She has been a research scientist at the Centre for Computational Biology of MINES ParisTech and Institut Curie (Paris, France) since 2013. Her research focuses on the development of methods for efficient multi-locus biomarker discovery. In particular she is interested in the incorporation of additional (structured) information, for example biological networks; in multi-task approaches, where one addresses multiple related problems simultaneously; and in the development of fast but accurate techniques to address these issues.
14:00 - 14:15 | Discussion | |
14:15 - 14:45 |
Empirical calibration for effect size estimation on observational healthcare studies
Existing health care data promise valuable insights, yet current practice relies on idiosyncratic study designs with unknown operating characteristics and on publishing (or not) one estimate at a time. The resulting distribution of estimates shows an over-abundance of ‘statistically significant’ estimates and strong indicators of publication bias. We describe a systematic process for observational research that can be evaluated, calibrated and applied at scale. We demonstrate this new paradigm by comparing all treatments for depression for a set of health outcomes using four large insurance claims databases. We estimate 17,718 hazard ratios, each using methodology on par with current state-of-the-art observational studies. Moreover, we employ negative and positive controls to evaluate and calibrate estimates, ensuring, for example, that the 95% confidence interval includes the true effect size approximately 95% of the time. Our generated results avoid data fishing and can inform medical decisions.
Professor David Madigan, Columbia University, USA
David Madigan serves as the ninth Executive Vice President for the Arts and Sciences and Dean of the Faculty at Columbia University, a position he assumed in 2013. He is a professor of statistics at Columbia, and served as the department chair from 2007 to 2013. Before coming to Columbia in 2007, Professor Madigan was dean of physical and mathematical sciences at Rutgers University. He is a Fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the American Association for the Advancement of Science. He received a bachelor’s degree in Mathematical Sciences and a PhD in Statistics, both from Trinity College Dublin. He has previously worked for AT&T Inc., Soliloquy Inc., the University of Washington, Rutgers University, and SkillSoft, Inc. He has over 170 publications in such areas as Bayesian statistics, text mining, Monte Carlo methods, pharmacovigilance and probabilistic graphical models.
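The negative-control calibration described in the abstract can be sketched as follows. This is a deliberately simplified illustration under stated assumptions, not the published method: it assumes systematic error is additive on the log-hazard-ratio scale and simply adds variances, whereas the full empirical-calibration approach fits an empirical null distribution by maximum likelihood.

```python
import math
import statistics


def calibrated_interval(log_hr, se, negative_control_log_hrs, z=1.96):
    """Widen a confidence interval using negative-control estimates.

    Negative controls are exposure-outcome pairs where the true
    hazard ratio is 1 (log HR = 0), so any spread in their estimates
    reflects systematic error.  This sketch treats that spread as an
    extra variance component and folds it into the standard error of
    a new estimate, returning the interval on the hazard-ratio scale.
    """
    bias = statistics.mean(negative_control_log_hrs)       # average systematic shift
    sys_sd = statistics.pstdev(negative_control_log_hrs)   # spread of systematic error
    total_se = math.sqrt(se ** 2 + sys_sd ** 2)
    centre = log_hr - bias                                  # de-biased point estimate
    return (math.exp(centre - z * total_se), math.exp(centre + z * total_se))
```

A nominal interval built from `se` alone would cover the true effect far less than 95% of the time when systematic error is present; the widened interval is what the negative controls buy.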
14:45 - 15:00 | Discussion | |
15:00 - 15:30 | Tea Break | |
15:30 - 16:00 | Professor Geraint Rees | |
16:00 - 16:15 | Discussion | |
16:15 - 17:00 | Panel discussion: future directions |