Executive summary
The internet has transformed the way people consume, produce, and disseminate information about the world. In the online information environment, internet users can tailor unlimited content to their own needs and desires. This shift away from limited, gatekept, and pre-scheduled content has democratised access to knowledge and driven societal progress. The COVID-19 pandemic exemplifies this, with global researchers collaborating virtually across borders to mitigate the harms of the disease and vaccinate populations.
The unlimited volume of content, however, means that capturing attention in the online information environment is difficult and highly competitive. This heightened competition for attention presents a challenge for those who wish to communicate trustworthy information to help guide important decisions. Poor navigation of this environment, or even its active exploitation, by prominent public figures and political leaders has on many occasions led to detrimental advice being disseminated amongst the public. This challenge has caused significant concern, with online ‘misinformation’ content widely discussed as a factor that impacts democratic elections and incites violence. In recent years, misinformation has also been identified as a challenge in relation to a range of scientific topics, including vaccine safety, climate change, and the rollout of 5G technology.
The Royal Society’s mission is to promote excellence in science and support its use for the benefit of humanity. The consumption and production of online scientific information is, therefore, of great interest. This report, The online information environment, provides an overview of how the internet has changed, and continues to change, the way society engages with scientific information, and how it may be affecting people’s decision-making behaviour – from taking up vaccines to responding to evidence on climate change. It highlights key challenges for creating a healthy online information environment and makes a series of recommendations for policymakers, academics, and online platforms.
These recommendations, when taken together, are intended to help build collective resilience to harmful misinformation content and ensure access to high quality information on both public and private forums.
The report has been guided by a working group of leading experts in this field and informed by a series of activities commissioned by the Royal Society. Firstly, literature reviews were commissioned on historical examples of scientific misinformation; the evidence surrounding echo chambers, filter bubbles, and polarisation; and the effects of information on individuals and groups. Secondly, the Society hosted various workshops and roundtables with prominent academics, fact-checking organisations, and online platforms. Finally, two surveys were commissioned – the first on people’s attitudes and behaviours towards online scientific misinformation and the second on people’s ability to detect deepfake video content.
The chapters of the report are focused on understanding and explaining essential aspects of the online information environment. They explore a broad range of topics, including the ways our minds process information and how this is affected by accessing information online; how information is generated in a digital context and the role of incentives for content production; and types of synthetic online content and their potential uses, both benign and malicious. However, there are important areas not covered in this report, outlined in Box 1, which form part of the wider questions around trust in science, in the internet, and in institutions. These include the role of traditional science communicators and the wider research community in enabling access to trustworthy information; the issue of online anonymity; and the impact that the online information environment can have on democracy and political events (eg elections).
Within this report, ‘scientific misinformation’ is defined as information which is presented as factually true but directly counters, or is refuted by, established scientific consensus. This usage encompasses concepts such as ‘disinformation’, which refers to the deliberate sharing of misinformation content.