Earlier in November, I joined Fellows, politicians and global leaders in technology and computer science at the UK’s AI Safety Summit at Bletchley Park.
This meeting attracted media interest around the world, much of it emphasising risks that may, or may not, emerge from AI’s development. To the public, these concerns may seem to have leapt from obscurity onto the global stage without warning.
However, as with so many innovations, AI has only reached this point after decades of research, development and use by researchers and technology companies. A gear change came this year, when ChatGPT suddenly put powerful AI tools in every pocket. It was swiftly followed by alarm bells over existential risk being sounded by senior members of the field – including some of the Society’s own Fellows.
AI’s vast possibilities mean it has long been a focus for the Royal Society. Within research alone, its potential to transform the speed and scale of scientific discovery is huge. One example, DeepMind’s AlphaFold, revolutionised our ability to predict protein structures – seemingly overnight. As the technology matures, its influence will extend to some of the biggest challenges of our time, from disease prevention to mitigating climate change.
But as the UK’s national academy of sciences, we are also engaged in ensuring that society is prepared for this new age of AI – and the potential risks it brings. While much attention is focused on existential risks to humanity, we should be careful to separate reality from the hype. That hype risks distracting from challenges that we know need to be addressed, from preserving digital privacy and preventing bias in decision-making tools, to understanding AI’s potential impacts on online misinformation and the world of work.
The Society’s work has begun to set out some of these near-term challenges, from policy priorities around the use of Privacy Enhancing Technologies and the vulnerabilities of The Online Information Environment, through to the You and AI lecture series, which aimed to engage members of the public around the UK in the debate.
That is also why, in the week before the Bletchley summit, we brought together ministers, scientists and technology experts at the Royal Society for our own Science x AI Safety meeting. Delegates, including Secretary of State Michelle Donelan and Royal Society Vice President Professor Alison Noble, identified ‘near-term’ risks and assessed the likelihood that they will come to pass. This official pre-summit event fed into discussions at the Bletchley meeting, and will also inform the Society’s upcoming Science in the Age of AI report.
Finding ways to improve AI safety, and to assure the safeguards that must be put in place, was a major theme of the Government’s summit. So it was great that the afternoon session of our meeting brought together postgraduate students in health and climate science for a “red teaming” exercise aimed at testing the guardrails of AI large language models. Run by Dr Rumman Chowdhury and Jutta Williams of Humane Intelligence, and informed by our report on online misinformation, the exercise showed that, in a few hours using Meta’s open-source Llama 2 model, it was possible to produce maliciously false and misleading scientific information about climate change and COVID-19 that could, in a real-world scenario, easily be spread online.
This timely exercise showed the value of scientists and policy makers engaging with technology companies to understand the impacts of their models on societal issues. It was therefore gratifying to see President Biden’s executive order on the safe, secure and trustworthy development and use of AI highlight the importance of red teaming and privacy enhancing technologies.
The Bletchley Declaration, agreed by the UK and 28 other countries represented at the summit, will be a foundational document in this shared global effort to ensure AI is used for global good. The UK has taken steps of its own, with a commitment to establish the first national AI Safety Institute, announced by Prime Minister Rishi Sunak in a speech at the Royal Society’s home in London. However, it is at the future summits in South Korea and France that AI’s second act begins; much of the hard work of turning these principles into action remains ahead.
There is real cause for optimism about where we are heading. The Society’s Science in the Age of AI report and an upcoming project on disability data and assistive technologies will set out some of these possible futures in more detail. We will continue to work with policy makers and the public on the architecture needed to deliver this vision.
While the reality and sophistication of AI’s threats may not quite match the pages of science fiction, the widespread adoption of this technology means that we all have a part to play in its responsible development and regulation.