Executive summary

The internet has transformed the way people consume, produce, and disseminate information about the world. In the online information environment, internet users can select from a virtually unlimited volume of content tailored to their own needs and desires. This shift away from limited, gatekept, and pre-scheduled content has democratised access to knowledge and driven societal progress. The COVID-19 pandemic exemplifies this, with researchers around the world collaborating virtually to mitigate the harms of the disease and vaccinate populations.

The unlimited volume of content, however, means that capturing attention in the online information environment is difficult and highly competitive. This heightened competition for attention presents a challenge for those who wish to communicate trustworthy information to help guide important decisions. Poor navigation of this environment by prominent public figures and political leaders, or even its active exploitation, has on many occasions led to detrimental advice being disseminated amongst the public. This has caused significant concern, with online ‘misinformation’ content widely discussed as a factor that influences democratic elections and incites violence. In recent years, misinformation has also been identified as a challenge in relation to a range of scientific topics, including vaccine safety, climate change, and the rollout of 5G technology.

The Royal Society’s mission is to promote excellence in science and support its use for the benefit of humanity. The consumption and production of online scientific information are, therefore, of great interest. This report, The online information environment, provides an overview of how the internet has changed, and continues to change, the way society engages with scientific information, and how it may be affecting people’s decision-making behaviour – from taking up vaccines to responding to evidence on climate change. It highlights key challenges for creating a healthy online information environment and makes a series of recommendations for policymakers, academics, and online platforms.

These recommendations, when taken together, are intended to help build collective resilience to harmful misinformation content and ensure access to high-quality information in both public and private forums.

The report has been guided by a working group of leading experts in this field and informed by a series of activities commissioned by the Royal Society. Firstly, literature reviews were commissioned on historical examples of scientific misinformation; the evidence surrounding echo chambers, filter bubbles, and polarisation; and the effects of information on individuals and groups. Secondly, the Society hosted various workshops and roundtables with prominent academics, fact-checking organisations, and online platforms. Finally, two surveys were commissioned – the first on people’s attitudes and behaviours towards online scientific misinformation and the second on people’s ability to detect deepfake video content.

The chapters of the report are focused on understanding and explaining essential aspects of the online information environment. They explore a broad range of topics, including the ways our minds process information and how this is affected by accessing information online; how information is generated in a digital context and the role of incentives for content production; and types of synthetic online content and their potential uses, both benign and malicious. However, there are important areas that are not covered in this report, outlined in Box 1, which are part of the wider questions around trust in science, the internet, and institutions. These include the role of traditional science communicators and the wider research community in enabling access to trustworthy information; the issue of online anonymity; and the impact that the online information environment can have on democracy and political events (eg elections).

Within this report, ‘scientific misinformation’ is defined as information which is presented as factually true but directly counters, or is refuted by, established scientific consensus. This usage includes concepts such as ‘disinformation’, which refers to the deliberate sharing of misinformation content.

Key findings

  • Although misinformation content is prevalent online, the extent of its impact is questionable (footnote 1). For example, the Society’s survey of members of the British public (footnote 2) found that the vast majority of respondents believe the COVID-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful. The majority believe the internet has improved the public’s understanding of science, report that they are likely to fact-check suspicious scientific claims they read online, and state that they feel confident challenging their friends and family on scientific misinformation.
  • The existence of echo chambers (where people encounter information that reinforces their own beliefs, online and offline) is less widespread than may be commonly assumed and there is little evidence to support the filter bubble hypothesis (where algorithms cause people to only encounter information that reinforces their own beliefs) (footnote 3, footnote 4).
  • Uncertainty is a core aspect of the scientific method, but significant dispute amongst experts can spill over to the wider public (footnote 5). This can be particularly challenging when the uncertainty is prolonged and the topic has no clear authority. This gap between uncertainty and certainty creates information ‘deserts’ online, with platforms unable to clearly guide users to trustworthy sources (footnote 6). For example, during the COVID-19 pandemic, organisations such as the World Health Organization and the National Health Service were able to act as authoritative voices online. However, with topics such as 5G telecommunications, it has been more difficult for platforms to quickly identify trustworthy sources of evidence and advice.
  • The concept of a single ‘anti-vax’ movement is misleading and does not represent the range of reasons why some people are reluctant to be vaccinated (footnote 7). Those with anti-vaccination sentiments can have distinct concerns, including child safety, or may act not out of scepticism about the evidence but out of distrust of governments. In addition, various actors are involved in creating and spreading anti-vaccination material. These include political actors, particularly when a relevant event (eg a pandemic) is dominating the news cycle (footnote 8, footnote 9).
  • Technology can play an important though limited role in addressing misinformation content online. In particular, it can be useful in areas such as the rapid detection of harmful misinformation content. Provenance enhancing technology, which provides information on the origins of online content and how it may have been altered, shows promise and will become increasingly important as misinformation content grows more sophisticated. Even now, expertly manipulated content appears to be difficult to detect: survey experiments conducted for this report indicate that most people struggle to identify deepfake video content even when prompted (footnote 11).
  • Incentives for content production and consumption are the most significant factor to consider when evaluating the online information environment. These incentives operate at both a macro level (platforms) and a micro level (individual users), and are described in this report in terms of content produced for public benefit (eg helping others) or for private benefit (eg generating financial profit). Understanding how to mitigate the role of these incentives in the spread of misinformation content requires further consideration of the economic and legal aspects of the online information environment.