
Overview

Scientific Discussion meeting organised by Professor Sofia Olhede, Professor Patrick Wolfe, Professor Tony McEnery and Professor Neil Lawrence.

The use of algorithms and analytics in society is exploding: from machine-learning recommender systems in commerce, to credit-scoring methods outside standard regulatory practice, to self-driving cars. The rapid adoption of new technology has the potential to greatly improve citizens’ experiences, but it also poses a number of new challenges. This meeting will highlight opportunities and challenges in this rapidly changing landscape, bringing legal and ethics experts together with technologists to discuss implications, impacts and innovations.

Enquiries: contact the Scientific Programmes Team


Schedule



09:05-09:30
Transparency and Accountability


09:30-09:45
Discussion
09:45-10:15
Algorithmic risk assessment policing models

Abstract

This talk uses Durham Constabulary’s Harm Assessment Risk Tool (HART) as a case study. HART is one of the first algorithmic models to be deployed by a UK police force in an operational capacity. The potential benefits of such tools will be discussed, the concept and method of HART considered, and the results of the model’s first validation reviewed. The talk will critique the use of algorithmic tools within policing from a societal and legal perspective, focusing in particular on substantive common law grounds for judicial review. Two linked proposals will be made: a concept of ‘experimental’ proportionality, and a decision-making guidance framework called ‘ALGO-CARE’. Together, these could create a model that recognises the need for controlled algorithmic experimentation in the public sector while acknowledging and carefully managing any risks to individual rights.


10:15-10:30
Discussion
10:30-11:00
Coffee Break
11:00-11:30
Algorithms, ethics and data protection: a regulator's view

Abstract

Abstract to be confirmed


11:30-11:45
Discussion
11:45-12:15
Algorithmic regulation and the Rule of Law

Abstract

This talk will first explore how we distinguish between law and regulation, explaining that regulation must be situated within the contours shaped by the law and the Rule of Law. It will then discuss a specific type of computational law based on data-driven legal technologies. The ensuing artificial legal intelligence enables quantified legal prediction and argumentation mining, both of which are built on machine learning applications (so-called natural language processing). This raises the question of whether the implementation of such technologies should count as law or as regulation, and what this means for their further development. The talk will then propose the concept of ‘agonistic machine learning’ as a means to bring data-driven regulation under the Rule of Law. This entails obligating developers and users of these technologies to re-introduce adversarial interrogation at the level of the computational architecture.


12:30-13:30
Lunch


13:30-14:00
Cat Drew


14:00-14:15
Discussion
14:15-14:45
How should we think about algorithmic accountability?

Abstract

This talk will suggest that data and AI innovation requires a public licence to operate. Hetan will consider how notions of data ethics change as technology changes. He will argue that making algorithms ‘accountable’ will be a key issue in retaining trust and trustworthiness, and will review different options for achieving this, including transparency, governance and monitoring of outcomes. He will also suggest that there is a need to work at a higher level, including the creation of professional standards and codes of ethics and conduct for data scientists. Finally, he will discuss the wider regulatory challenges posed in this area and consider what policymakers and regulators should be doing.


14:45-15:00
Discussion
15:00-15:30
Tea Break
15:30-16:00
Algorithms and multi-disciplinary research

Abstract

Speaker: Rebecca Endean OBE, UK Research and Innovation, UK

Abstract to be confirmed



16:00-16:15
Discussion
16:15-16:45
Transparency and Trust – legal liability for algorithmic decisions

Abstract

Algorithmic decisions can give rise to legal liability, both for causing direct losses (such as in motor vehicle accidents) and for infringing fundamental rights. In either case, the law looks for an explanation of how and why the algorithm made its decision, i.e. for transparency of the decision-making process.

But there is an important difference between ex ante and ex post transparency. The more complex the algorithm, particularly where it derives from machine learning, the more difficult it becomes to provide ex ante transparency. And there is a strong argument that by demanding ex ante transparency the law might limit the improvement of algorithmic decision-making.

This talk explains the principles which should apply in deciding whether ex ante or ex post transparency is sufficient, or indeed whether a complete inability to provide explanations might be permissible. It also attempts to identify how lawmakers should decide between incentivising transparency via liability laws as opposed to mandating transparency through regulation.


16:45-17:00
Discussion

09:00-09:30
Machine Learning and the Humanitarian Information Gap

Abstract

Mounting an effective response to a humanitarian crisis depends on high quality and timely information. However, the very nature of such crises makes it a challenge to collect reliable data, particularly in the time scale of days or hours when it is most needed. Given the unprecedented quantities of data now being generated worldwide (e.g. by sensors, satellites, mobile devices, and the usage of digital services), as well as recent advances in the algorithms which can make sense of this raw data, there is significant potential to improve the initial assessment and ongoing monitoring of emergencies. This talk will discuss some of the opportunities and limitations, using examples of work conducted during various natural and man-made emergencies.


09:45-10:15
Differential privacy and how it compares with legal standards of privacy

Abstract

Differential privacy is a robust concept of privacy which brings mathematical rigor to the decades-old problem of privacy-preserving analysis of collections of sensitive personal information. Informally, differential privacy requires that the outcome of an analysis would remain stable under any possible change to an individual's information, and hence protects individuals from attackers that try to learn the information particular to them. The subject of much theoretical investigation, differential privacy has recently been making significant strides towards implementation and use. 

This talk will present differential privacy and discuss how one can reason about how it matches with concepts of privacy appearing in privacy law and regulations.

Based on the work of a working group: K Nissim, A Bembenek, A Wood, M Bun, M Gaboardi, U Gasser, D O'Brien, T Steinke and S Vadhan.
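The informal definition above can be illustrated with the canonical Laplace mechanism for releasing a mean. This sketch is not part of the talk; the function name, parameters and clipping bounds are illustrative assumptions, using only the Python standard library:

```python
import math
import random

def dp_mean(values, lo, hi, epsilon, rng=None):
    """Release an epsilon-differentially-private mean via the Laplace mechanism.

    Each value is clipped to [lo, hi], so changing one individual's record
    can shift the mean by at most (hi - lo) / n -- the query's sensitivity.
    Noise scaled to sensitivity / epsilon masks any one person's contribution.
    """
    rng = rng or random.Random()
    clipped = [min(max(v, lo), hi) for v in values]
    n = len(clipped)
    sensitivity = (hi - lo) / n
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(clipped) / n + noise
```

A smaller epsilon means more noise and stronger protection; the released value stays "stable" under any single-record change, which is exactly the property the abstract describes.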


10:15-10:30
Discussion
10:30-11:00
Coffee Break
11:00-11:30
Data science for the public sector

Abstract

Public sector organisations are increasingly interested in using data science capabilities to deliver policy and generate efficiencies in high uncertainty environments. The long-term success of data science in the public sector relies on successfully embedding it into delivery solutions for policy implementation. This requires organisational innovation and change delivered through structural and cultural adaptation, together with capacity building. Another key factor for success is the contribution of academia and the private and third sector. This talk will discuss the opportunities that exist for using data science in delivering public services at the international and national levels.


11:30-11:45
Discussion
11:45-12:15
The automation of political communication on Twitter: the case of the Brexit botnet

Abstract

Dr Dan Mercea, City, University of London, UK

This presentation reports on a network of Twitterbots (automated posting protocols) comprising 13,493 accounts that tweeted about the U.K.-E.U. membership referendum, only to disappear from Twitter shortly after the ballot. We compared active users to this set of political bots with respect to temporal tweeting behaviour, the size and speed of retweet cascades, and the composition of those cascades (user-to-bot vs bot-to-bot) to identify strategies for bot deployment. Our results advance the analysis of political bots by showing that Twitterbots can be effective at rapidly generating small to medium-sized cascades; that the retweeted content comprises user-generated hyperpartisan news, which is not strictly fake news but whose shelf life is remarkably short; and, finally, that a botnet may be organised in specialised tiers or clusters dedicated to replicating either active users or content generated by other bots.



12:30-13:30
Lunch


13:30-14:00
Machine learning and genomics: precision medicine vs patient privacy

Abstract

Machine learning has the potential for major societal impact in computational biology applications. In particular, it plays a central role in the development of precision medicine, whereby treatment is tailored to the clinical or genetic specificities of patients. However, these advances require collecting and sharing large amounts of genomic data among researchers, which generates much concern about privacy. This talk will review recent trends in both compromising and protecting patient privacy.


14:00-14:15
Discussion
14:15-14:45
Empirical calibration for effect size estimation in observational healthcare studies

Abstract

Existing health care data promise valuable insights, yet current practice relies on idiosyncratic study designs with unknown operating characteristics, publishing (or not) one estimate at a time. The resulting distribution of estimates shows an over-abundance of ‘statistically significant’ estimates and strong indicators of publication bias. We describe a systematic process for observational research that can be evaluated, calibrated and applied at scale. We demonstrate this new paradigm by comparing all treatments for depression across a set of health outcomes using four large insurance claims databases. We estimate 17,718 hazard ratios, each using methodology on par with current state-of-the-art observational studies. Moreover, we employ negative and positive controls to evaluate and calibrate estimates, ensuring, for example, that the 95% confidence interval includes the true effect size approximately 95% of the time. Our generated results avoid data fishing and can inform medical decisions.
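The calibration idea in the abstract can be sketched roughly as follows. This is a deliberately simplified illustration, not the authors' actual method: it assumes systematic error is normally distributed, and the function name, inputs and the negative-control values are hypothetical. Negative controls are exposure-outcome pairs whose true (log) effect is null, so the spread of their estimates reveals the systematic error that conventional confidence intervals ignore:

```python
import math
import statistics

def calibrate_ci(estimate, se, negative_controls, z=1.96):
    """Widen and recentre a 95% CI for a log effect estimate using
    negative-control estimates (true effect assumed to be zero).

    The mean of the negative controls estimates systematic bias; their
    standard deviation estimates systematic error, which is combined
    with the study's own standard error.
    """
    bias = statistics.mean(negative_controls)
    sys_sd = statistics.stdev(negative_controls)
    # Total uncertainty combines random error (se) and systematic error.
    total_se = math.sqrt(se**2 + sys_sd**2)
    lo = estimate - bias - z * total_se
    hi = estimate - bias + z * total_se
    return lo, hi
```

The calibrated interval is wider than the naive one whenever the negative controls scatter around zero, which is how calibration restores the promised 95% coverage.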


14:45-15:00
Discussion
15:00-15:30
Tea Break
15:30-16:00
Professor Geraint Rees
16:00-16:15
Discussion
16:15-17:00
Panel discussion: future directions