
The growing ubiquity of algorithms in society: implications, impacts and innovations

Scientific meeting

Location

The Royal Society, 6-9 Carlton House Terrace, London, SW1Y 5AG

Overview

Scientific Discussion meeting organised by Professor Sofia Olhede, Professor Patrick Wolfe, Professor Tony McEnery and Professor Neil Lawrence.

The use of algorithms and analytics in society is exploding: from machine learning recommender systems in commerce, to credit scoring methods operating outside standard regulatory practice, to self-driving cars. The rapid adoption of new technology has the potential to greatly improve citizens’ experiences, but it also poses a number of new challenges. This meeting will highlight opportunities and challenges in this rapidly changing landscape, bringing legal and ethics experts together with technologists to discuss implications, impacts and innovations.

More information on the speakers and programme will be available soon. Recorded audio of the presentations will be available on this page after the meeting has taken place.

Poster session

There will be a poster session at 17:00 on Monday 30 October 2017. If you would like to apply to present a poster, please submit your proposed title, abstract (no more than 200 words and in the third person), author list, name of the proposed presenter and institution to the Scientific Programmes team no later than Friday 13 October 2017. Please note that places are limited and posters are selected at the scientific organisers' discretion. Poster abstracts will only be considered if the presenter is registered to attend the meeting.

Attending this event

This meeting is intended for researchers in relevant fields.

  • Free to attend
  • Limited places, advance registration is essential
  • An optional lunch can be purchased during registration

Enquiries: contact the Scientific Programmes Team


Schedule of talks

30 October

Session 1 09:00-12:30

Machine learning and the law

4 talks

Chairs

Professor Tony McEnery, Economic and Social Research Council, UK

09:05-09:30 Transparency and Accountability

Christina Blacklaws, The Law Society of England and Wales, UK


09:30-09:45 Discussion

09:45-10:15 Algorithmic risk assessment policing models

Dr Marion Oswald

Abstract

This talk uses Durham Constabulary’s Harm Assessment Risk Tool (HART) as a case study. HART is one of the first algorithmic models to be deployed by a UK police force in an operational capacity. The potential benefits of such tools will be discussed, the concept and method of HART considered, and the results of the model’s first validation reviewed. The talk will critique the use of algorithmic tools within policing from a societal and legal perspective, focusing in particular upon substantive common law grounds for judicial review. Two linked proposals will be made: a concept of ‘experimental’ proportionality, and a decision-making guidance framework called ‘ALGO-CARE’, which together could create a model that recognises the need for controlled algorithmic experimentation in the public sector while at the same time acknowledging and carefully managing any risks to individual rights.


10:15-10:30 Discussion

10:30-11:00 Coffee Break

11:00-11:30 Maths, privacy, ethics: what does it mean for people?

Iain Bourne, Information Commissioner's Office, UK

Abstract

This talk will look at the intersection of science and information rights, beneath the umbrella of a move towards the development of clearer ethical standards for data scientists and others. It will explore individuals’ rights in relation to increasingly sophisticated and little-understood information systems, and will consider some possible means of bridging a potentially widening rift between citizens and the organisations that collect and use information about them. Alongside this the talk will make a realistic appraisal of ‘the deal’ that people want in respect of their personal information. What degree of control do they want? What do they want to know about the systems that process their personal information?


11:30-11:45 Discussion

11:45-12:15 Algorithmic regulation and the Rule of Law

Professor Mireille Hildebrandt, Vrije Universiteit Brussel

Abstract

This talk will first explore how we distinguish between law and regulation, explaining that regulation must be situated within the contours shaped by the law and the Rule of Law. It will then discuss a specific type of computational law based on data-driven legal technologies. The ensuing artificial legal intelligence enables quantified legal prediction and argumentation mining, both of which are based on machine learning applications (so-called natural language processing). This raises the question of whether the implementation of such technologies should count as law or as regulation, and what this means for their further development. The talk will propose the concept of ‘agonistic machine learning’ as a means to bring data-driven regulation under the Rule of Law. This entails obligating developers and users of these technologies to re-introduce adversarial interrogation at the level of the computational architecture.


12:30-13:30 Lunch

Session 2 13:30-17:00

Algorithms, from regulation to privacy and trust

4 talks

Chairs

Professor Patrick J Wolfe

13:30-14:00 Cat Drew

14:00-14:15 Discussion

14:15-14:45 How should we think about algorithmic accountability?

Hetan Shah, Royal Statistical Society, UK

Abstract

This talk will suggest that data and AI innovation requires a public licence to operate. Hetan will consider the changing notions of data ethics as technology changes. He will argue that making algorithms 'accountable' will be a key issue in retaining trust and trustworthiness. He will then review different options for this, including transparency, governance, monitoring outcomes. He will also suggest that there are needs to work at a higher level including the creation of professional standards and codes of ethics / conduct for data scientists. He will also discuss the wider regulatory challenges posed in this area and consider what policymakers and regulators should be doing.


14:45-15:00 Discussion

15:00-15:30 Tea Break

15:30-16:00 Speaker to be confirmed

16:00-16:15 Discussion

16:15-16:45 Transparency and Trust – legal liability for algorithmic decisions

Professor Chris Reed

Abstract

Algorithmic decisions can give rise to legal liability, both for causing direct losses (such as in motor vehicle accidents) and for infringing fundamental rights. In either case, the law looks for an explanation of how and why the algorithm made its decision, i.e. for transparency of the decision-making process.

But there is an important difference between ex ante and ex post transparency. The more complex the algorithm, particularly where it derives from machine learning, the more difficult it becomes to provide ex ante transparency. And there is a strong argument that by demanding ex ante transparency the law might limit the improvement of algorithmic decision-making.

This talk explains the principles which should apply in deciding whether ex ante or ex post transparency is sufficient, or indeed whether a complete inability to provide explanations might be permissible. It also attempts to identify how lawmakers should decide between incentivising transparency via liability laws as opposed to mandating transparency through regulation.


16:45-17:00 Discussion

31 October

Session 3 09:00-12:30

Algorithms with societal impact

4 talks

09:00-09:30 Machine Learning and the Humanitarian Information Gap

Dr John Quinn, United Nations Global Pulse, UK

Abstract

Mounting an effective response to a humanitarian crisis depends on high quality and timely information. However, the very nature of such crises makes it a challenge to collect reliable data, particularly in the time scale of days or hours when it is most needed. Given the unprecedented quantities of data now being generated worldwide (e.g. by sensors, satellites, mobile devices, and the usage of digital services), as well as recent advances in the algorithms which can make sense of this raw data, there is significant potential to improve the initial assessment and ongoing monitoring of emergencies. This talk will discuss some of the opportunities and limitations, using examples of work conducted during various natural and man-made emergencies.


09:45-10:15 Differential privacy and how it compares with legal standards of privacy

Professor Kobbi Nissim

Abstract

Differential privacy is a robust concept of privacy which brings mathematical rigor to the decades-old problem of privacy-preserving analysis of collections of sensitive personal information. Informally, differential privacy requires that the outcome of an analysis would remain stable under any possible change to an individual's information, and hence protects individuals from attackers that try to learn the information particular to them. The subject of much theoretical investigation, differential privacy has recently been making significant strides towards implementation and use. 

This talk will present differential privacy and discuss how one can reason about how it matches with concepts of privacy appearing in privacy law and regulations.

Based on the work of a working group: K Nissim, A Bembenek, A Wood, M Bun, M Gaboardi, U Gasser, D O'Brien, T Steinke and S Vadhan.
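The stability requirement sketched in the abstract can be made concrete with the standard Laplace mechanism for a counting query. The following minimal Python sketch is illustrative only (it is not code from the talk): a counting query has sensitivity 1, since adding or removing one individual changes the true count by at most 1, so adding Laplace noise with scale 1/ε yields ε-differential privacy.

```python
import math
import random


def laplace_noise(scale):
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def dp_count(values, predicate, epsilon):
    # A counting query has sensitivity 1: one individual's record
    # changes the true count by at most 1, so noise with scale
    # 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller ε means stronger privacy but noisier answers; the released count barely depends on whether any single individual is in the data, which is precisely the stability property the abstract describes.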


10:15-10:30 Discussion

10:30-11:00 Coffee Break

11:00-11:30 Professor Ken Benoit

11:30-11:45 Discussion

11:45-12:15 Title to be confirmed

12:30-13:30 Lunch

Session 4 13:30-17:00

Data analytics for human health

3 talks

Chairs

Professor Sofia Olhede, University College London

13:30-14:00 Machine learning and genomics: precision medicine vs patient privacy

Dr Chloe-Agathe Azencott, Mines Paris Tech, France

Abstract

Machine learning has the potential of major societal impact in computational biology applications. In particular, it plays a central role in the development of precision medicine, whereby treatment is tailored to the clinical or genetic specificities of the patients. However, these advances require collecting and sharing among researchers large amounts of genomic data, which generates much concern about privacy. This talk will review recent trends in both compromising and protecting patient privacy.


14:00-14:15 Discussion

14:15-14:45 Empirical calibration for effect size estimation in observational healthcare studies

Professor David Madigan, Columbia University, USA

Abstract

Existing health care data promise valuable insights, yet current practice relies on idiosyncratic study designs with unknown operating characteristics, publishing (or not) one estimate at a time. The resulting distribution of estimates shows an over-abundance of ‘statistically significant’ estimates and strong indicators of publication bias. We describe a systematic process for observational research that can be evaluated, calibrated and applied at scale. We demonstrate this new paradigm by comparing all treatments for depression for a set of health outcomes using four large insurance claims databases. We estimate 17,718 hazard ratios, each using methodology on par with current state-of-the-art observational studies. Moreover, we employ negative and positive controls to evaluate and calibrate estimates, ensuring, for example, that the 95% confidence interval includes the true effect size approximately 95% of the time. Our generated results avoid data fishing and can inform medical decisions.
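The coverage check described in the abstract can be illustrated with a toy simulation (a hypothetical sketch, not the authors' implementation). Negative controls have a known null effect, so the fraction of nominal 95% confidence intervals containing zero reveals whether the study design is biased:

```python
import random


def coverage_of_null(n_controls, bias, se, z=1.96, seed=1):
    """Fraction of nominal 95% CIs that cover the true (zero) effect.

    Simulates log-hazard-ratio estimates for negative controls,
    whose true effect is 0. Systematic bias in the study design
    drags coverage below the nominal 95%.
    """
    rng = random.Random(seed)
    covered = 0
    for _ in range(n_controls):
        est = bias + rng.gauss(0, se)          # observed estimate
        if est - z * se <= 0 <= est + z * se:  # CI covers the truth?
            covered += 1
    return covered / n_controls
```

An unbiased design yields coverage near 95%, while a systematic bias comparable to the standard error collapses it well below that. Empirical calibration in the sense of the abstract uses the observed error distribution on such controls to adjust estimates and intervals until nominal coverage is restored.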


14:45-15:00 Discussion

15:00-15:30 Tea Break

15:30-16:00 Professor Geraint Rees

16:00-16:15 Discussion
