Royal Society cautions against censorship of scientific misinformation online

19 January 2022

Governments and social media platforms should not rely on content removal to combat harmful scientific misinformation online, a report by the Royal Society, the UK’s national academy of science, has said.

The Online Information Environment report also warns that the UK Government’s upcoming Online Safety Bill focuses on harms to individuals while failing to recognise the wider ‘societal harms’ that misinformation can cause. Misinformation about scientific issues, from vaccine safety to climate change, can cause harm to individuals and society at large. 

The report says there is little evidence that removing offending content from major platforms will limit the harms of scientific misinformation, and warns that such measures could even drive it to harder-to-address corners of the internet and exacerbate distrust in authorities.

It recommends wide-ranging measures that governments, tech platforms and academic institutions can take to build resilience to misinformation and foster a healthy online information environment.

Professor Frank Kelly FRS, Professor of the Mathematics of Systems at the Statistical Laboratory, University of Cambridge, and Chair of the report, said, "Science stands on the edge of error and the nature of the scientific endeavour at the frontiers means there is always uncertainty.

"In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself, but that prodding and testing of received wisdom is integral to the advancement of science, and society.

"This is important to bear in mind when we are looking to limit scientific misinformation’s harms to society. Clamping down on claims outside the consensus may seem desirable, but it can hamper the scientific process and force genuinely malicious content underground."

The report defines scientific "misinformation" as content that is presented as fact but runs counter to, or is refuted by, the scientific consensus. This includes ‘disinformation’, which refers to the deliberate sharing of misinformation.

While the internet has led to a proliferation of misinformation, the impact of that misinformation on public understanding to date is less clear.

The vast majority of British respondents to a YouGov poll, commissioned for the report, agreed that COVID-19 vaccines are safe and that human activity is changing the climate; around one in 20 disputed the scientific position.

Though small, this group who dispute the science can be influential. Its members express a range of motivations for sharing misinformation, from altruistic concern to profit and political gain, and no single intervention is likely to reach them all.

Instead, the report recommends a range of measures for policy makers, online platforms and others to understand and limit misinformation’s harms, including:

  • Supporting media plurality and independent fact-checking – A robust, diverse, independent news media benefits science and public understanding. Policies which threaten its sustainability, including algorithms that determine outlets’ trustworthiness or position in social media feeds, should be carefully scrutinised. Long-term sustainable funding is also needed to support independent fact-checking organisations that play a vital role in a healthy online information environment.
  • Monitoring and mitigating evolving sources of scientific misinformation online – Interventions to stop harmful disinformation spreading on private forums or direct messaging services without breaching encryption could include sharing limits or technologies that help users verify the validity of messages or images. "Fringe" social media platforms, which are not currently a focus of the UK’s draft Online Safety Bill, should also be monitored.
  • Investing in lifelong information literacy – Education on digital literacy should not be limited to schools and colleges. Older adults are more likely to be targeted by, and susceptible to, online misinformation.

Professor Gina Neff, Professor of Technology & Society at the Oxford Internet Institute, Executive Director of the Minderoo Centre for Technology and Democracy, University of Cambridge, and a member of the report’s working group, said, "Scientific misinformation doesn’t just affect individuals; it can harm society and even future generations if allowed to spread unchecked.

"Our polling showed that people have complex reasons for sharing misinformation, and we won’t change this by giving them more facts.

"We need new strategies to ensure high quality information can compete in the online attention economy. This means investing in lifelong information literacy programmes, provenance enhancing technologies, and mechanisms for data sharing between platforms and researchers."

Dr Vint Cerf ForMemRS, Vice President and Chief Internet Evangelist at Google and a member of the report’s working group, said, "Technology plays a big role in shaping our information environment and, as this report makes clear, it has a part to play in tackling scientific misinformation as well.

"Many technology platforms already use tools like demonetisation, regulating the use of recommendation algorithms and fact-check labels to reduce the harms of scientific misinformation without censoring debate.

"Misinformation is a complex problem. Technology, governments, science institutions, educators and the public all have a part to play in assuring the quality of scientific information that underpins so much of our day-to-day lives."