The growing ubiquity of algorithms in society: implications, impacts and innovations

30 - 31 October 2017 09:00 - 17:00

Scientific Discussion meeting organised by Professor Sofia Olhede, Professor Patrick Wolfe, Professor Tony McEnery and Professor Neil Lawrence.

The use of algorithms and analytics in society is exploding: from machine learning recommender systems in commerce, to credit scoring methods outside standard regulatory practice, to self-driving cars. The rapid adoption of new technology has the potential to greatly improve citizens’ experiences, but it also poses a number of new challenges. This meeting will highlight opportunities and challenges in this rapidly changing landscape, bringing legal and ethics experts together with technologists to discuss implications, impacts and innovations.

Enquiries: contact the Scientific Programmes Team

Organisers

  • Professor Sofia Olhede, University College London

    Sofia Olhede has been a Professor of Statistics at University College London (UCL) since 2007, and a year later was made an honorary professor of Computer Science there. She was awarded her PhD in 2003 at Imperial College London, where she was a Lecturer (assistant professor) and Senior Lecturer (associate professor) between 2002 and 2006. She is Director of UCL's Centre for Data Science and, until last year, chaired the Alan Turing Institute’s Science Committee. Sofia served on the UK Royal Society’s Machine Learning Committee and the British Academy and Royal Society Data Governance Project, and is a member of the Personal Data and Individual Access Control section of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She currently holds a European Research Council consolidator fellowship, and previously held a five-year UK Engineering and Physical Sciences Research Council Leadership Fellowship.

  • Professor Neil Lawrence, Sheffield University, UK

    Neil Lawrence leads Amazon Research Cambridge, where he is a Director of Machine Learning. He is on leave of absence from the University of Sheffield, where he was a Professor in Computational Biology and Machine Learning jointly appointed across the Departments of Neuroscience and Computer Science. Neil’s main research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He has a particular focus on applications in personalised health and computational biology, but happily dabbles in other areas such as speech, vision and graphics.

  • Professor Patrick J Wolfe, University College London

    Patrick J. Wolfe is a professor of statistics and computer science and EPSRC Established Career Fellow in the Mathematical Sciences at University College London. He joined the faculty of University College London in 2012 after teaching at Cambridge and then Harvard, and is the founding director of UCL’s Big Data Institute. Professor Wolfe is also a trustee and non-executive director of the Alan Turing Institute, the United Kingdom’s new national institute for data science, where he has played a leading role in establishing the institute and shaping its priorities through an extensive programme of engagement with a diverse range of experts and stakeholders. A past recipient of the Presidential Early Career Award for Scientists and Engineers from the White House while at Harvard, he has provided expert advice on applications of data science to policy, societal, and commercial challenges, including to the U.S. and U.K. governments and to a range of public and private bodies. Professor Wolfe has recently been appointed Dean of the College of Science at Purdue University.

  • Professor Tony McEnery, Economic and Social Research Council, UK

    Professor Tony McEnery has been appointed as the Economic and Social Research Council’s (ESRC) new Research Director. Professor McEnery is Distinguished Professor of English Language and Linguistics at Lancaster University and Director of the ESRC Centre for Corpus Approaches to Social Science (CASS). His work at CASS focuses on bringing linguistic analysis to bear in a range of interdisciplinary contexts, on topics as diverse as climate change, Islamophobia, medical communication and poverty. In February, Professor McEnery received the Queen’s Anniversary Prize at Buckingham Palace on behalf of CASS, for its work in 'computer analysis of world languages in print, speech, and online'. Professor McEnery has worked with scholars from a broad range of subjects, including accountancy, criminology, international relations, religious studies and sociology. He has also worked with an array of impact partners including British Telecom, the Department of Culture, Media and Sport, the Environment Agency, the Home Office, IBM and Research in Motion. Professor McEnery was also Dean of Arts and Social Sciences at Lancaster, and before that Director of Research at the Arts and Humanities Research Council. He joined ESRC as its Director of Research, on secondment from Lancaster University, in October 2016.

Schedule

Chair

Professor Tony McEnery, Economic and Social Research Council, UK

09:05 - 09:30 Transparency and Accountability

Christina Blacklaws, The Law Society of England and Wales, UK

09:30 - 09:45 Discussion
09:45 - 10:15 Algorithmic risk assessment policing models

This talk uses Durham Constabulary’s Harm Assessment Risk Tool (HART) as a case study. HART is one of the first algorithmic models to be deployed by a UK police force in an operational capacity. The potential benefits of such tools will be discussed, the concept and method of HART considered, and the results of the model’s first validation reviewed. The talk will critique the use of algorithmic tools within policing from a societal and legal perspective, focusing in particular upon substantive common law grounds for judicial review. Two linked proposals will be made: a concept of ‘experimental’ proportionality, and a decision-making guidance framework called ‘ALGO-CARE’, which together could create a model that recognises the need for controlled algorithmic experimentation in the public sector while at the same time acknowledging and carefully managing any risks to individual rights.

Dr Marion Oswald

10:15 - 10:30 Discussion
10:30 - 11:00 Coffee Break
11:00 - 11:30 Algorithms, ethics and data protection: a regulator's view

Abstract to be confirmed

Carl Wiper

11:30 - 11:45 Discussion
11:45 - 12:15 Algorithmic regulation and the Rule of Law

This talk will first explore how we distinguish between law and regulation, explaining that regulation must be situated within the contours shaped by the law and the Rule of Law. After this, a specific type of computational law, based on data-driven legal technologies, will be discussed. The ensuing artificial legal intelligence enables quantified legal prediction and argumentation mining, which are both based on machine learning applications (so-called natural language processing). This raises the question of whether the implementation of such technologies should count as law or as regulation, and what this means for their further development. The talk will then propose the concept of ‘agonistic machine learning’ as a means to bring data-driven regulation under the Rule of Law. This entails obligating developers and users of these technologies to re-introduce adversarial interrogation at the level of the computational architecture.

Professor Mireille Hildebrandt, Vrije Universiteit Brussel

12:30 - 13:30 Lunch

Chair

Patrick J. Wolfe

Professor Patrick J Wolfe

13:30 - 14:00 Cat Drew
14:00 - 14:15 Discussion
14:15 - 14:45 How should we think about algorithmic accountability?

This talk will suggest that data and AI innovation requires a public licence to operate. Hetan will consider the changing notions of data ethics as technology changes. He will argue that making algorithms 'accountable' will be a key issue in retaining trust and trustworthiness. He will then review different options for this, including transparency, governance and monitoring outcomes. He will also suggest that there is a need to work at a higher level, including the creation of professional standards and codes of ethics and conduct for data scientists. Finally, he will discuss the wider regulatory challenges posed in this area and consider what policymakers and regulators should be doing.

Hetan Shah, Royal Statistical Society, UK

14:45 - 15:00 Discussion
15:00 - 15:30 Tea Break
15:30 - 16:00 Algorithms and multi-disciplinary research

Speaker: Rebecca Endean OBE, UK Research and Innovation, UK

Abstract to be confirmed

16:00 - 16:15 Discussion
16:15 - 16:45 Transparency and Trust – legal liability for algorithmic decisions

Algorithmic decisions can give rise to legal liability, both for causing direct losses (such as in motor vehicle accidents) and for infringing fundamental rights. In either case, the law looks for an explanation of how and why the algorithm made its decision, i.e. for transparency of the decision-making process.

But there is an important difference between ex ante and ex post transparency. The more complex the algorithm, particularly where it derives from machine learning, the more difficult it becomes to provide ex ante transparency. And there is a strong argument that by demanding ex ante transparency the law might limit the improvement of algorithmic decision-making.

This talk explains the principles which should apply in deciding whether ex ante or ex post transparency is sufficient, or indeed whether a complete inability to provide explanations might be permissible. It also attempts to identify how lawmakers should decide between incentivising transparency via liability laws as opposed to mandating transparency through regulation.

Professor Chris Reed

16:45 - 17:00 Discussion
09:00 - 09:30 Machine Learning and the Humanitarian Information Gap

Mounting an effective response to a humanitarian crisis depends on high-quality and timely information. However, the very nature of such crises makes it a challenge to collect reliable data, particularly in the time scale of days or hours when it is most needed. Given the unprecedented quantities of data now being generated worldwide (e.g. by sensors, satellites, mobile devices, and the usage of digital services), as well as recent advances in the algorithms which can make sense of this raw data, there is significant potential to improve the initial assessment and ongoing monitoring of emergencies. This talk will discuss some of the opportunities and limitations, using examples of work conducted during various natural and man-made emergencies.

Dr John Quinn, United Nations Global Pulse, UK

09:45 - 10:15 Differential privacy and how it compares with legal standards of privacy

Differential privacy is a robust concept of privacy which brings mathematical rigor to the decades-old problem of privacy-preserving analysis of collections of sensitive personal information. Informally, differential privacy requires that the outcome of an analysis would remain stable under any possible change to an individual's information, and hence protects individuals from attackers that try to learn the information particular to them. The subject of much theoretical investigation, differential privacy has recently been making significant strides towards implementation and use. 

This talk will present differential privacy and discuss how one can reason about how it matches with concepts of privacy appearing in privacy law and regulations.
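As an illustrative sketch (not drawn from the talk itself), the classic Laplace mechanism shows how differential privacy makes an analysis stable under any one individual's data: a counting query changes by at most 1 when one record is added or removed, so adding Laplace noise with scale 1/ε yields ε-differential privacy. The function names below are hypothetical.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # individual's record changes the true count by at most 1, so
    # Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical example: a differentially private count of people aged 40+.
ages = [34, 29, 41, 52, 38, 45]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller ε means stronger privacy but noisier answers; the released count no longer reveals whether any particular individual's record was present.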

Based on the work of a working group: K Nissim, A Bembenek, A Wood, M Bun, M Gaboardi, U Gasser, D O'Brien, T Steinke and S Vadhan.

Professor Kobbi Nissim

10:15 - 10:30 Discussion
10:30 - 11:00 Coffee Break
11:00 - 11:30 Data science for the public sector

Public sector organisations are increasingly interested in using data science capabilities to deliver policy and generate efficiencies in high uncertainty environments. The long-term success of data science in the public sector relies on successfully embedding it into delivery solutions for policy implementation. This requires organisational innovation and change delivered through structural and cultural adaptation, together with capacity building. Another key factor for success is the contribution of academia and the private and third sector. This talk will discuss the opportunities that exist for using data science in delivering public services at the international and national levels.

Professor Slava Mikhaylov

11:30 - 11:45 Discussion
11:45 - 12:15 The automation of political communication on Twitter: the case of the Brexit botnet

This presentation reports on a network of Twitterbots (automatic posting protocols) comprising 13,493 accounts that tweeted the UK-EU membership referendum, only to disappear from Twitter shortly after the ballot. We compared active users to this set of political bots with respect to temporal tweeting behaviour, the size and speed of retweet cascades, and the composition of their retweet cascades (user-to-bot vs. bot-to-bot) to evidence strategies for bot deployment. Our results move forward the analysis of political bots by showing that Twitterbots can be effective at rapidly generating small to medium-sized cascades; that the retweeted content comprises user-generated hyperpartisan news, which is not strictly fake news, but whose shelf life is remarkably short; and, finally, that a botnet may be organized in specialized tiers or clusters dedicated to replicating either active users or content generated by other bots.

Dr Dan Mercea, City, University of London, UK

12:30 - 13:30 Lunch

Chair

Professor Sofia Olhede, University College London

13:30 - 14:00 Machine learning and genomics: precision medicine vs patient privacy

Machine learning has the potential of major societal impact in computational biology applications. In particular, it plays a central role in the development of precision medicine, whereby treatment is tailored to the clinical or genetic specificities of the patients. However, these advances require collecting and sharing among researchers large amounts of genomic data, which generates much concern about privacy. This talk will review recent trends in both compromising and protecting patient privacy.

Dr Chloe-Agathe Azencott, Mines Paris Tech, France

14:00 - 14:15 Discussion
14:15 - 14:45 Empirical calibration for effect size estimation in observational healthcare studies

Existing health care data promise valuable insights, yet current practice relies on idiosyncratic study designs with unknown operating characteristics and publishing (or not) one estimate at a time. The resulting distribution of estimates shows an over-abundance of ‘statistically significant’ estimates and strong indicators of publication bias. We describe a systematic process for observational research that can be evaluated, calibrated and applied at scale. We demonstrate this new paradigm by comparing all treatments for depression for a set of health outcomes using four large insurance claims databases. We estimate 17,718 hazard ratios, each using methodology on par with current state-of-the-art observational studies. Moreover, we employ negative and positive controls to evaluate and calibrate estimates, ensuring, for example, that the 95% confidence interval includes the true effect size approximately 95% of the time. Our generated results avoid data fishing and can inform medical decisions.

Professor David Madigan, Columbia University, USA

14:45 - 15:00 Discussion
15:00 - 15:30 Tea Break
15:30 - 16:00 Professor Geraint Rees
16:00 - 16:15 Discussion
16:15 - 17:00 Panel discussion: future directions