Recommendations

Area for action: protecting online safety

Recommendation 1

As part of its online harms strategy, the UK Government must combat misinformation which risks societal harm as well as personalised harm, particularly where it threatens a healthy environment for scientific communication.

When considering the potential damage caused by unchecked scientific misinformation online, the framing of ‘harm’ adopted by the UK Government has focused primarily on harm caused to individuals rather than to society as a whole (footnote 12). This limitation risks excluding, for example, misinformation about climate change. While the commissioned YouGov survey suggests that levels of climate change denialism in the UK are very low (footnote 13), there is evidence to suggest that misinformation encouraging climate ‘inactivism’ is on the rise (footnote 14, footnote 15).

The consequences of societally harmful misinformation, including its influence on decision-makers and public support for necessary policy changes, could feasibly contribute to physical or psychological harm to individuals in future (eg through failure to mitigate climate catastrophe).

This view is supported by our YouGov survey, which suggests that the public are more likely to consider misinformation about climate change to be harmful (footnote 16) than misinformation about 5G technology (a subject which has been frequently cited within discussions on online harms (footnote 17, footnote 18, footnote 19)).

There needs to be a recognition that misinformation which affects collective societal interests can cause individual harm, especially to infants and future generations who do not have a voice (footnote 20). We recommend that the impact of societal harms on current and future generations, such as misinformation about climate change, is given serious consideration within the UK Government’s strategy to combat online harms.

Recommendation 2

Governments and social media platforms should not rely on content removal as a solution to online scientific misinformation.

Society benefits from honest and open discussion on the veracity of scientific claims (footnote 21). These discussions are an important part of the scientific process and should be protected. When these discussions risk causing harm to individuals or wider society, it is right to seek measures which can mitigate this. This has often led to calls for online platforms to remove content and ban accounts (footnote 22, footnote 23, footnote 24). However, whilst this approach may be effective and essential for illegal content (eg hate speech, terrorist content, child sexual abuse material), there is little evidence to support its effectiveness for scientific misinformation; approaches which address the amplification of misinformation may be more effective.

In addition, a causal link between online misinformation and offline harm is difficult to demonstrate (footnote 25, footnote 26), and there is a risk that content removal may cause more harm than good by driving misinformation content (and people who may act upon it) towards harder-to-address corners of the internet (footnote 27).

Deciding what is and is not scientific misinformation is highly resource intensive (footnote 28) and not always immediately achievable, as some scientific topics lack consensus (footnote 29) or a trusted authority for platforms to seek advice from (footnote 30). What may be feasible and affordable for established social media platforms may be impractical or prohibitively expensive for emerging platforms which experience similar levels of engagement (eg views, uploads, users) (footnote 31).

Furthermore, removing content may exacerbate feelings of distrust and be exploited by others to promote misinformation content (footnote 32, footnote 33, footnote 34).

Finally, misinformation sometimes comes from domestic political actors, civil society groups, or individual citizens who may, in good faith, believe in the content they are spreading, even if it may be harmful to others. They may well regard direct action against their expression as outright censorship (footnote 35, footnote 36).

Allowing content to remain on platforms with mitigations to manage its impact may be a more effective approach to prioritise. Examples of mitigations include demonetising content (eg by disabling ads on misinformation content); focusing on reducing amplification of those messages by preventing viral spread (footnote 37) or regulating the use of algorithmic recommender systems (footnote 38); and annotating content with fact-check labels (see Recommendation 3). These mechanisms would allow for open and informed discussions on scientific topics whilst acknowledging or addressing any controversies associated with the content.
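
To illustrate the amplification-focused mitigations above, the sketch below (in Python) shows one way a feed-ranking function could downweight and demonetise content flagged by fact-checkers rather than remove it. The `Post` fields, weights, and penalty factor are hypothetical assumptions for illustration only; production recommender systems are vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float   # platform's base relevance/engagement estimate
    fact_check_flag: bool     # set when a fact-checker disputes the content
    monetised: bool           # whether ads run against this content

# Hypothetical penalty: flagged posts stay visible but rank lower.
FLAG_DOWNRANK_FACTOR = 0.2

def rank_score(post: Post) -> float:
    """Return the feed-ranking score, downweighting disputed content
    instead of removing it."""
    score = post.engagement_score
    if post.fact_check_flag:
        score *= FLAG_DOWNRANK_FACTOR  # reduce amplification, keep the post up
    return score

def can_monetise(post: Post) -> bool:
    """Demonetise disputed content: no ads, but the post remains accessible."""
    return post.monetised and not post.fact_check_flag

posts = [
    Post("a1", engagement_score=0.9, fact_check_flag=True, monetised=True),
    Post("b2", engagement_score=0.6, fact_check_flag=False, monetised=True),
]
feed = sorted(posts, key=rank_score, reverse=True)
print([p.post_id for p in feed])  # ['b2', 'a1'] – flagged post ranks lower
print(can_monetise(posts[0]))     # False – ads disabled on disputed content
```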

As this report highlights, the online information environment has provided major benefits for collective scientific understanding by enabling the free exchange of knowledge amongst industry, academia, and members of the wider population. The Royal Society has long believed that the scientific community has a duty to communicate with the public in order to help people make informed decisions about their lives (footnote 39). Removing content and driving users away from platforms which engage with scientific authorities risks making this harder, not easier, to achieve. A more nuanced, sustainable, and focused approach towards misinformation is needed.

Recommendation 3

To support the UK’s nascent fact-checking sector, programmes which foster independence and financial sustainability are necessary. To help address complex scientific misinformation content and ‘information deserts’, fact checkers could highlight areas of growing scepticism or dispute, for deeper consideration by organisations with strong records in carrying out evidence reviews, such as the UK’s national academies and learned societies.

In response to the challenge of misinformation, a number of major online platforms have partnered with independent fact-checkers, certified by the International Fact-Checking Network (IFCN) (footnote 40), to help them identify and address misleading content (footnote 41). Google and Facebook have themselves invested in independent fact-checking (footnote 42, footnote 43). As such, fact-checkers – and wider misinformation organisations who also partner with major platforms – have become a vital part of the infrastructure which ensures a healthy online information environment. Although fact-checkers have historically been affiliated with traditional media companies, this association is weakening, with a number of independent, dedicated fact-checking organisations being set up (footnote 44). There are now estimated to be 290 fact-checking organisations across the world, with 40% of them based in Europe and North America (footnote 45).

A key challenge facing organisations working in the fact-checking sector is sustainable funding (footnote 46). Many are SMEs or NGOs (footnote 47). According to a 2016 Reuters Institute survey of European fact-checking organisations, more than half reported an annual expenditure of less than $50,000 and just over one quarter reported an annual expenditure of more than $100,000 (footnote 48).

A 2020 survey by the IFCN found that 43% of respondents said their main source of income was Facebook’s Third Party Fact-Checking Program (footnote 49), while a further 42% reported that their income comes from donations, memberships, or grants (footnote 50).

Providing users with tools to safely navigate the online information environment will be essential to combat harmful scientific misinformation. A survey conducted by YouGov for this report suggests there is already an appetite to fact-check information, with the majority of respondents reporting that they would fact-check a suspicious or surprising scientific claim they read online (footnote 51). The important role of fact-checkers is also recognised in the UK Government’s Online Media Literacy Strategy (footnote 52). These organisations generally provide a simple mechanism for users to verify the validity of claims made online and play an important role in informing content-moderation decisions. They provide a public benefit, form a core part of anti-misinformation initiatives, and should be supported. Should the financial viability of organisations in this nascent sector collapse, it could have detrimental effects for the health of the online information environment. As impartiality and financial independence are critical to trust in these organisations, their options for funding are limited (footnote 53). Philanthropic foundations and other grant funders are likely to remain necessary in the short to medium term. Platforms, funders, and government need to consider sustainable models for long-term funding in this sector.

Furthermore, organisations with expertise in evidence synthesis (such as the UK’s national academies) have a role to play and should engage with fact-checking organisations to help provide clarity on complex scientific misinformation content where feasible. This could involve fact-checkers highlighting areas of growing scepticism or dispute as being in need of deeper consideration, in order to address the challenges associated with information deserts where there is no clearly recognised scientific authority.

Recommendation 4

Ofcom must consider interventions for countering misinformation beyond high-risk, high-reach social media platforms.

Under plans set out in the UK Government’s Draft Online Safety Bill, regulations will apply depending on the number of users and/or the type of functionalities which exist on an online platform (footnote 54). Category 1 services, described as ‘high-risk, high-reach’ services, will be expected to take action on content deemed to be legal but harmful (footnote 55), a category which misinformation is likely to fall under (footnote 56). These services will likely include mainstream social media platforms such as Facebook, YouTube, Twitter, and TikTok.

Given the size of these platforms, it is right for them to take appropriate action against harmful misinformation. However, many of these platforms are already taking steps to mitigate the effects of misinformation (footnote 57) and it is not clear that focusing on these high-reach services alone is enough to reduce the effects of harmful misinformation. This focus risks excluding small platforms, with significantly lower reach, from higher scrutiny. Some of these smaller platforms host harmful content banned elsewhere, garnering hundreds of thousands of views (footnote 58).

It is also unclear whether others, such as online retailers, will be expected to take action on harmful but legal content, despite there being examples of scientific misinformation content being promoted on their platforms (footnote 59).

Only a minority of internet users believe in the most prominent examples of scientific misinformation (footnote 60). It may well be the case that this minority of users consume harmful misinformation content on fringe online platforms. However, by prioritising mainstream social media platforms, there is a risk that Ofcom will lack the necessary authority and capacity to address misinformation which exists elsewhere in the online information environment. We recommend careful consideration of which platforms to focus interventions on and advise Ofcom to include fringe online platforms within its focus.

Recommendation 5

Online platforms and scientific authorities should consider designing interventions for countering misinformation on private messaging platforms.

As users shift away from conversations on open, public platforms in favour of closed, private forums (footnote 61, footnote 62), it is likely to become more difficult to analyse the online information environment and design interventions to counter misinformation. This shift will require a re-analysis of society’s collective understanding of how information spreads online, as lessons learned from public social media platforms are difficult to translate to private forums (footnote 63).

Designing interventions which preserve end-to-end encryption is essential for ensuring the security and privacy of people’s conversations (footnote 64). It is therefore necessary to design interventions which do not require prior knowledge of a message’s content. Current examples include mechanisms to understand how messages spread (footnote 65) or to limit the number of times they can be shared (footnote 66), an option to forward a message to a fact-checker (footnote 67), and the creation of official accounts for scientific authorities (footnote 68). Provenance-enhancing technologies also present a potential solution here (see Chapter 3) (footnote 69). These technologies would work by providing users with information about the origins (provenance) of a piece of online content as well as details of any alterations made to it (footnote 70). This could provide a tool to help users verify the validity of any text, images, or videos they receive on a private or public communications channel.
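
As a minimal sketch of one such content-agnostic intervention, loosely modelled on the forward limits some messaging apps have deployed: the sending client tracks a forward count in unencrypted metadata, so viral spread can be dampened without the platform ever reading the encrypted body. The field names and the limit of five are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical client-side limit, similar in spirit to deployed forward limits.
MAX_FORWARDS = 5

@dataclass
class Message:
    ciphertext: bytes   # end-to-end encrypted body; never inspected here
    forward_count: int  # unencrypted metadata incremented by the sending client

def forward(message: Message) -> Message:
    """Forward a message without reading its content.

    The check uses only metadata, so end-to-end encryption is preserved:
    neither the platform nor this function learns what the message says.
    """
    if message.forward_count >= MAX_FORWARDS:
        raise PermissionError("Forward limit reached.")
    return Message(message.ciphertext, message.forward_count + 1)

msg = Message(ciphertext=b"\x8f\x02...", forward_count=4)
msg = forward(msg)        # succeeds: count becomes 5
try:
    forward(msg)          # fails: limit reached, viral spread is dampened
except PermissionError as err:
    print(err)
```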

Assuming trends towards private messaging continue (footnote 71), misinformation content is likely to become less visible to researchers, regulators, and the platforms themselves. This will therefore become an increasingly important area for those interested in fostering a healthy online information environment. Online platforms and scientific authorities need to consider this behaviour shift in information consumption and design interventions which can promote good quality information and mitigate any harmful effects from misinformation.

Area of action: enabling greater understanding of the online information environment

Recommendation 6

Social media platforms should establish ways to allow independent researchers access to data in a privacy-compliant and secure manner.

Understanding the nature of information production and consumption is critical to ensuring society is prepared for future challenges which arise from the online information environment (footnote 72). Analysis of the rich datasets held by social media platforms can help decision-makers understand the extent of harmful online content, how influential it is, and who is producing it. It should also help enable transparent, independent assessments of the effectiveness of counter-misinformation interventions.

The open nature of some platforms (eg Twitter) makes independent research easier to undertake whilst the more restricted nature of other platforms (eg Facebook, YouTube, TikTok) makes this more difficult (footnote 73).

Designing a solution to this and ensuring access to useful data for researchers is highly complex with significant challenges related to privacy, usability, and computing power (footnote 74).

Attempts to do so, such as Social Science One (footnote 75), have faced criticism from funders (footnote 76) and academics (footnote 77) for delays and insufficient access.

Developing a safe and privacy-preserving means for independent and impartial analysis, such as a trusted research environment (footnote 78), is an important challenge for Research Councils, Legal Deposit Libraries, and social media platforms to overcome. Social media platforms have ultimate control of this data and should commence, or continue, efforts to provide access for independent researchers in a secure and privacy-compliant manner.

The Royal Society has an ongoing programme of work related to privacy-preserving data analysis and the role of technology in protecting data subjects and is exploring past attempts, existing barriers, and viable solutions to enable privacy-preserving analysis of data (footnote 79).
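
One technique such privacy-preserving analysis might draw on is differential privacy, in which calibrated noise is added to aggregate statistics before release so that no single user’s data can be inferred. The sketch below adds Laplace noise to a simple count; the query and the epsilon value are illustrative assumptions, not a production design.

```python
import numpy as np

def dp_count(values: list[bool], epsilon: float = 1.0) -> float:
    """Release a differentially private count.

    Adding or removing one user changes a count by at most 1 (the
    sensitivity), so Laplace noise with scale 1/epsilon masks any
    single individual's contribution.
    """
    true_count = sum(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many users in a sample shared a flagged post?
shared_flagged_post = [True, False, True, True, False] * 200  # 1,000 users
print(dp_count(shared_flagged_post, epsilon=0.5))  # true count 600, plus noise
```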

Recommendation 7

Focusing solely on the needs of current online platforms risks a repetition of existing problems, as new, underprepared, platforms emerge and gain popularity. To promote standards and guide start-ups, interested parties need to collaborate to develop examples of best practice for countering misinformation as well as datasets, tools, software libraries, and standardised benchmarks.

It is important to consider the health of the online information environment beyond the currently dominant online platforms. New platforms which grow quickly face a challenge of having to address large amounts of misinformation content without the benefit of years of experience (footnote 80). Focusing solely on the needs of current online platforms risks a repetition of the same problems as new, underprepared, platforms emerge and gain popularity.

A particular challenge is the lack of data new platforms will have access to in order to train automated detection systems for misinformation content (footnote 81). There are already some encouraging examples of attempts to create datasets (footnote 82) and machine learning models (footnote 83) to assist with this problem. Researchers, policymakers, and platforms must work together to develop further such initiatives. These should be developed and implemented in a secure, privacy-compliant manner, and published under open licences to allow reuse. To ensure high quality data input for machine learning models, the development of data assurance practices should be encouraged (footnote 84).
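
As a toy illustration of the shared tooling envisaged here, the sketch below trains a minimal text classifier using scikit-learn. The handful of labelled examples is invented for illustration; a real open dataset would contain thousands of expert-labelled items, and a production system would need far more careful evaluation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples; a shared open dataset would supply real labels.
texts = [
    "5G towers spread the virus, share before it's deleted!",
    "Vaccines contain microchips that track you",
    "Peer-reviewed trial finds vaccine 90% effective",
    "Met Office publishes annual UK climate report",
]
labels = [1, 1, 0, 0]  # 1 = flagged as likely misinformation, 0 = not flagged

# TF-IDF features + logistic regression: a deliberately simple baseline
# that a new platform could train on an openly licensed dataset.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Secret cure they don't want you to know about"]))
```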

Knowledge of how best to ensure a healthy online information environment exists within various fields of expertise, including computational sociology (footnote 85), open-source intelligence (footnote 86), library and information science (footnote 87), and media literacy (footnote 88). As such, calls for collaboration should encompass all interested parties who can usefully contribute to the development of best practice tools and guidance for future online platforms.

Area of action: creating a healthy and trustworthy online information environment

Recommendation 8

Governments and online platforms should implement policies that support healthy and sustainable media plurality.

Many news outlets are a key source of good quality (footnote 89) and trusted (footnote 90) information. The online information environment has provided, and continues to provide, an ecosystem which allows for increased media plurality with few barriers to entry (footnote 91, footnote 92). It is a feature which exposes users to a wide range of viewpoints and prevents a concentration of influence over public opinion (footnote 93). Reporting about science has also benefited from this plurality with new science and technology media outlets gaining significant online followings (footnote 94).

Moves to elevate or prioritise content from ‘trustworthy’ news outlets (footnote 95) in social media feeds present a risk to online media plurality, are likely to favour established, traditional media outlets over new media outlets (footnote 96), and would not necessarily reduce exposure to misinformation content (footnote 97). Although strong arguments have been put forward for online platforms to determine the quality of news content (footnote 98), efforts to compare and rate the trustworthiness of different media outlets (eg with nutrition labels) have proven complex, with some attempts attracting controversy (footnote 99, footnote 100).

Furthermore, unilateral decisions about how algorithms present news content in social media feeds and search engines can negatively impact the reach, traffic, and economic performance of both traditional and new media outlets (footnote 101).

Governments and online platforms need to consider the impact of any future policies on media plurality and take action to ensure a sustainable future for public interest journalism (footnote 102). Robust, diverse, independent news media and education (see Recommendation 9) together can make people more resilient in the face of any potentially harmful misinformation they come across.

Recommendation 9

The UK Government should invest in lifelong, nationwide, information literacy initiatives.

Ensuring that current and future populations can safely navigate the online information environment will require significant investment in digital information literacy, ensuring that people can effectively evaluate online content. In practice, this could include education on how to assess URLs (footnote 103), how to reverse image search (footnote 104), and how to identify a deepfake (footnote 105).
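
To make ‘assessing a URL’ concrete, the sketch below implements a few of the warning signs a literacy course might teach, using only Python’s standard library. The heuristics are illustrative assumptions and deliberately crude: they prompt scepticism rather than deliver a verdict.

```python
from urllib.parse import urlparse

def url_red_flags(url: str) -> list[str]:
    """Return simple warning signs a literacy course might teach.

    Heuristics only: legitimate sites can trigger them and bad ones
    can pass, so they prompt scepticism rather than give a verdict.
    """
    flags = []
    host = urlparse(url).hostname or ""
    if urlparse(url).scheme != "https":
        flags.append("no HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode hostname (possible lookalike domain)")
    if host.count(".") >= 3:
        flags.append("many subdomains (real brand may only be a prefix)")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    return flags

print(url_red_flags("http://bbc.co.uk.news-update.example.com/article"))
# ['no HTTPS', 'many subdomains (real brand may only be a prefix)']
```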

This education should not be limited to those in schools, colleges, and universities, but extended to people of all ages. Older adults face a particular challenge with misinformation as they are more likely to be targeted, and more likely to be susceptible, than younger adults (footnote 106). These groups could be reached through public information campaigns, in workplaces, or on social media platforms. Current initiatives such as the UK Government’s ‘Don’t Feed the Beast’ campaign (footnote 107) and the Check Before You Share toolkit (footnote 108) should be assessed for their effectiveness and improved where necessary.

As the nature of the online information environment is likely to continue evolving over time with new platforms, technologies, actors, and techniques, it is important to consider information literacy as a life skill, supplemented with lifelong learning. These initiatives should be carefully tailored and designed to support people from a broad range of demographics.

There have been widespread calls (footnote 109, footnote 110, footnote 111, footnote 112) for digital information literacy to form a core part of future strategies to ensure people can safely navigate the online information environment. Successful implementation of the UK Government’s Online Media Literacy Strategy is an important next step (footnote 113).

Area of action: enabling access to scientific information

Recommendation 10

Academic journals and institutions should continue to work together to enable open access publishing of academic research.

The ability to easily share and find high quality information is one of the greatest benefits of the online information environment and likely explains why the majority of respondents to the Society’s survey believe the internet has improved the public’s understanding of science (footnote 114). In particular, the internet’s role in opening access to academic research, which would otherwise be locked within physical journals, can often be transformative for society’s collective understanding of the world.

Ensuring ease of access to academic research online helps promote more accurate verification of results, reduces duplication of work, and improves public trust in science (footnote 115). As a strong supporter of open science (footnote 116), the Royal Society is currently working towards transitioning its own primary research journals to open access, which will help maximise the dissemination and impact of high-quality scientific research (footnote 117).

The COVID-19 pandemic has further strengthened the case for open access publishing (footnote 118, footnote 119) and has demonstrated its benefits (footnote 120). These benefits can and should be realised for a broad range of societal problems beyond the pandemic. Moves towards open access publishing (footnote 121) are to be welcomed, and academic journals and institutions should work together to enable further open access publishing of academic research.

Novel aspects of open access research, such as the growing popularity of preprints (footnote 122) or the use of citations as an indicator of quality (footnote 123), have been subject to debate in recent years. We note these concerns and encourage institutions to consider lessons learned for the next generation of academic publishing.

Recommendation 11

The frameworks governing electronic legal deposit should be reviewed and reformed to allow better access to archived digital content.

In 2013, the UK Government introduced new regulations that required digital publications to be systematically preserved as part of something known as legal deposit. Legal deposit has existed in English law since 1662 and obliges publishers to place at least one copy of everything they publish in the UK and Ireland – from books to music and maps – at a designated library.

Since it was extended to include digital media, the six designated legal deposit libraries in the UK have accumulated around 700 terabytes of archived web data as part of the UK Web Archive, growing by around 70 terabytes every year. The libraries automatically collect – or crawl – UK websites at least once a year to gather a snapshot of what they contain, while some important websites, such as news sites, are collected daily. They also collect ebooks, electronic journals, videos, PDFs, and social media posts – almost everything that is available in a digital format.
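
A minimal sketch of the crawling step described above, using only Python’s standard library: check a site’s robots.txt, fetch one page, and store it with a capture timestamp. Real archival crawlers (such as Heritrix, widely used by web archives) handle scale, deduplication, and the WARC format; this shows only the core idea, and the crawler name is a hypothetical placeholder.

```python
import urllib.request
import urllib.robotparser
from datetime import datetime, timezone
from pathlib import Path

USER_AGENT = "example-archive-bot"  # hypothetical crawler name

def snapshot(url: str, archive_dir: str = "archive") -> Path | None:
    """Fetch one page politely and store a timestamped snapshot."""
    # Respect the site's robots.txt before fetching anything.
    robots = urllib.robotparser.RobotFileParser()
    base = "/".join(url.split("/")[:3])  # scheme://host
    robots.set_url(base + "/robots.txt")
    robots.read()
    if not robots.can_fetch(USER_AGENT, url):
        return None

    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request, timeout=10) as response:
        body = response.read()

    # One file per crawl, named by capture time, as in a periodic snapshot.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out = Path(archive_dir)
    out.mkdir(exist_ok=True)
    path = out / f"snapshot-{stamp}.html"
    path.write_bytes(body)
    return path

print(snapshot("https://example.com/"))
```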

Access to this material is extremely limited. Due to the current legislative framework, historic pages for only around 19,000 websites can be accessed through the Web Archive’s online portal. These are sites whose creators have given explicit permission to allow open access to their content; however, contacting every UK website in this way is almost impossible. For the rest, even though access is permitted and the material is held digitally, researchers must travel to one of nine named sites in person. The framework also permits only one researcher to use a piece of material at any one time; an arbitrary limitation when it comes to digital access.

This framework for access is now out of step with how people access and use data, and severely limits the value that trustworthy libraries and archives are able to offer (footnote 124). Opening up the Web Archive would allow it to be mined at scale for high quality information using modern text analysis methods or artificial intelligence. It would enable researchers, businesses, journalists, and anyone else with an interest to uncover trends or information hidden in web pages from the past. This will become increasingly important as the online information environment matures and vital source material is digitally archived (see Chapter 4).

The frameworks governing electronic legal deposit need to be reviewed and reformed to allow wider access. Such a review would need to consider the data held in these legal deposits that remains commercially valuable, such as newspaper archives. Rather than act as a barrier to access, systems such as micropayments – like those already made to authors of books borrowed from libraries – could be applied to such material in order to support broader access.