Last month’s International Conference on Machine Learning brought together over 5,000 researchers to share recent developments and novel approaches in machine learning.
Alongside the advances in reinforcement learning, new approaches to interpretability, and studies of the security implications of AI presented during the conference, debates also began to emerge about the future of machine learning research, and about the role researchers play in shaping that future through the way they talk about their work.
In this context, one of the conference’s most talked-about papers – ‘Troubling trends in machine learning scholarship’ – highlighted the “misuse of language” as a key area of concern for the field. The paper raised questions about how researchers and journalists should talk about AI: does describing AI in terms of human-level performance (a system outperforms lawyers!) or human characteristics (AI has imagination!) help start a conversation, or does it unhelpfully distort public debate? A recent Guardian article described the dominant narratives about AI as “unhinged”, with a combination of hype, speculation, and genuinely interesting technological advances contributing to AI’s capabilities being misrepresented in public debate. Such misrepresentations can skew how public discourse weighs the risks and opportunities associated with AI.
Public awareness of the technologies driving recent advances in AI is low. Extensive public dialogues carried out by the Royal Society in 2016 and 2017 showed that only 9% of those surveyed had heard the term ‘machine learning’, and only 3% felt that they knew a great deal or a fair amount about it. However, many more people had heard about the applications of machine learning – 76% of respondents had heard of computers that can recognise speech and answer questions, for example. In these dialogues, participants tended to be more interested in the context surrounding the use of machine learning than in the technology itself, asking: why is it being used? For what purpose or in what application? And for whose benefit?
In the absence of widespread public awareness of these technologies, most people’s views about AI will be shaped by the narratives that are part of our shared cultures. The ideas about AI technologies that are pervasive in public consciousness – typically that AI is an embodied, super-human intelligence that looks a lot like the Terminator – are shaped by hundreds of years of stories that people have told about humans, machines, and our place in the world. This cultural hinterland shapes how AI is portrayed in media, culture, and everyday discussion; it influences what societies find concerning – or exciting – about technological developments; and it affects how different publics relate to AI technologies.
So the way we all talk about AI technologies matters – it can direct the attention of the public, policymakers, and researchers towards (or away from) particular areas of opportunity or concern, and it can influence how societies respond to technological advances. It can enable technological development, or hold it back. Building a well-founded public dialogue about AI technologies will therefore be key to maintaining public confidence in the systems that deploy them, and to realising the benefits they promise across sectors.
This type of dialogue is part of the environment of careful stewardship of AI technologies and data use that the Royal Society is working to create. As AI technologies are put to use in a growing range of contexts or applications, continuing engagement between researchers, policymakers, and the public will be important in helping to ensure that the benefits of AI are shared across society.
Since the launch of our machine learning project, the Royal Society has been creating spaces for public discussion about AI technologies and their implications for society. Our You and AI programme brings cutting-edge research to a public audience through a series of lectures delivered by leading thinkers in AI. Through this programme, we want to raise the level of public conversation about AI, its applications, and the implications of these technologies for fairness, equality, and the future of work.
At the same time, our AI narratives project – in partnership with the Leverhulme Centre for the Future of Intelligence – has been examining how society talks about AI and why this matters. As we’ve described on this blog before, the project explores alternative narratives that could offer fresh ways of thinking about AI’s impact on society – such as the new complexities arising from distributed AI – and considers how researchers and communicators can diversify the stories we tell about AI.
Further events in the You and AI series will take place from September to December, and we’ll be publishing a write-up of the insights from the AI narratives project in the autumn.