How can science policy serve society? From 16 to 20 February, scientists, policymakers, and members of the public gathered in Boston for the annual meeting of the American Association for the Advancement of Science (AAAS).

As part of our ongoing policy project on machine learning, the Society supported three sessions exploring the societal impact of artificial intelligence (AI), the significance of interpretability in machine learning, and tools for communicating research in AI, machine learning, and robotics. This post gives a flavour of what our panellists discussed in these sessions.

Advances in machine learning

While machine learning systems have been around in various forms for at least several decades, the field of machine learning and AI is particularly exciting at the moment. Better algorithms, access to more data, and more powerful computers mean that the capabilities of these technologies have advanced rapidly in recent years, and machine learning systems can now perform at a much higher level than in the past.

This increased performance is opening the door to a growing range of machine learning applications. For example, computer vision techniques that are currently being used to recognise faces on social media are being developed to diagnose cancer more accurately, and problem-solving abilities that helped a computer beat the world champion at Go have been used to help save energy in data centres.

David Parkes, Harvard, told our AI, People, and Society session that Stanford’s AI100 study has identified a range of applications for AI in transport, healthcare, education, and entertainment. This diversity of applications was on show across AAAS, with delegates discussing the use of machine learning in policing, predicting disease outbreaks, and diagnosing illnesses.

Machine learning, AI, and society

As machine learning finds uses in a greater range of fields, it raises new questions about its place in society, and how we want to make use of this technology. Our sessions explored some of these questions, including who benefits from machine learning, and how we can work with ‘black box’ systems.

Machine learning and AI promise significant economic benefits. However, as Erik Brynjolfsson, MIT, explained, while progress in AI might grow the “economic pie”, there are open questions about who will benefit from this growth, especially if productivity increases and wage growth are ‘decoupled’. Erik’s talk on AI and employment set out how policy responses to the use of AI can shape who benefits from this technology, with implications for both the economy and society.

A key benefit of machine learning is its ability to analyse vast amounts of data; these systems could generate insights or identify patterns from so-called ‘big data’ that we could not previously have studied. Hanna Wallach, Microsoft Research, set out some of the ethical challenges she deals with when carrying out research in computational social science. For example, machine learning systems may reinforce existing biases in their training data, or produce outputs whose errors disproportionately affect certain groups. In this context, understanding what the system is doing and why is key.
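
To make Hanna’s point concrete, here is a minimal, hypothetical sketch of the kind of check it implies: breaking a classifier’s error rate down by group rather than looking only at overall accuracy. The model, group labels, and error rates below are all synthetic and for illustration only.

```python
# Synthetic illustration: a model whose overall accuracy hides a large
# disparity in error rates between two groups.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)       # 0 = majority, 1 = minority group
y_true = rng.integers(0, 2, size=n)      # ground-truth labels

# Simulate a model that errs more often for the minority group,
# e.g. because that group is under-represented in the training data.
flip_prob = np.where(group == 0, 0.05, 0.20)
flip = rng.random(n) < flip_prob
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"overall error rate: {np.mean(y_pred != y_true):.3f}")
for g in (0, 1):
    mask = group == g
    err = np.mean(y_pred[mask] != y_true[mask])
    print(f"group {g}: error rate = {err:.3f}")
# Roughly 5% vs 20%: the same model quietly performs far worse for
# one group, which is easy to miss without this breakdown.
```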

Rich Caruana, Microsoft Research, used an example from his analysis of hospital admissions to demonstrate why it’s important to be able to interpret what a machine learning system is doing. When creating a model to predict the risk of patients developing complications from pneumonia, Rich found a surprising result: the model indicated that a history of asthma reduced a patient’s risk of dying from pneumonia.

Further interrogation showed this seemingly anomalous result came about because doctors tended to admit pneumonia patients with a history of asthma to hospital immediately, meaning they had rapid access to care. Simply assuming these patients were at lower risk on the basis of the model’s outputs would have led to poorer outcomes for this group. Understanding what a model has learned – and why – can therefore be key to making informed decisions about its suitability; Rich didn’t deploy his neural network in this situation, but instead developed a rule-based system that was more interpretable.
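
The pneumonia story is easier to see with a toy reconstruction. The sketch below is not Rich’s model or data; it builds a synthetic dataset in which asthma patients always receive aggressive care (which is not recorded as a feature, as in many clinical datasets), then fits a simple, interpretable logistic regression and inspects its coefficient to surface the same misleading ‘asthma is protective’ pattern before any deployment decision.

```python
# Synthetic reconstruction of the confound Rich Caruana described.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

asthma = rng.random(n) < 0.1        # 10% have a history of asthma
severity = rng.normal(size=n)       # latent illness severity

# In this synthetic world, asthma patients are always treated aggressively,
# and aggressive care sharply reduces the chance of dying.
aggressive_care = asthma | (rng.random(n) < 0.3)
logit = -2.0 + 1.5 * severity - 2.0 * aggressive_care
died = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit an interpretable model on the observed features only;
# treatment intensity is missing from the data.
X = np.column_stack([asthma, severity]).astype(float)
model = LogisticRegression().fit(X, died)
print(f"asthma coefficient: {model.coef_[0][0]:.2f}")
# The coefficient comes out negative: the model has 'learned' that asthma
# lowers pneumonia mortality. Deployed naively, it would deprioritise
# exactly the patients whose good outcomes depend on rapid care.
```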

And as Anders Sandberg, Future of Humanity Institute, explained, we are usually interested in why a system has done something, not simply what happened; this requires machine learning systems to be able to produce explanations that are concrete, causal, and consistent. Creating such systems is an exciting area of research.

The importance of public engagement

Across these discussions, we heard about the importance of public dialogue about machine learning, and our participants showed the role scientists have to play in this dialogue. In Beyond the hype: tools to demystify robotics and AI, Sabine Hauert explained why it is important for researchers working in robotics and AI to be active in communicating their research to the public. Engaging in this way can help demystify – or de-hype – these areas of work, while also increasing the impact of individual projects.

Public dialogue has been a key part of the Royal Society’s machine learning project. We’ve been holding public events – including our forthcoming events with the British Academy – and running a public dialogue exercise on machine learning. You will be able to read more about this work in the project report, which will be published in Spring 2017.

For more information about the Royal Society’s work at AAAS, check out our Storify of the event. For further information about the Royal Society’s policy work on machine learning, see our Machine Learning webpage.

Authors

  • Jessica Montgomery
