Recommendations
Area for action: protecting online safety
Recommendation 1
As part of its online harms strategy, the UK Government must combat misinformation which risks societal harm as well as personalised harm, especially where it threatens a healthy environment for scientific communication.
When considering the potential damage caused by unchecked scientific misinformation online, the framing of ‘harm’ adopted by the UK Government has focused primarily on harm caused to individuals rather than to society as a whole (footnote 12). This limitation risks excluding, for example, misinformation about climate change. While the commissioned YouGov survey suggests that levels of climate change denialism in the UK are very low (footnote 13), there is evidence that misinformation encouraging climate ‘inactivism’ is on the rise (footnote 14, footnote 15).
The consequences of societally harmful misinformation, including its influence on decision-makers and public support for necessary policy changes, could feasibly contribute to physical or psychological harm to individuals in future (eg through failure to mitigate climate catastrophe).
This view is supported by our YouGov survey, which suggests the public are more likely to consider misinformation about climate change to be harmful (footnote 16) than misinformation about 5G technology, a subject which has been cited extensively within discussions on online harms (footnote 17, footnote 18, footnote 19).
There needs to be a recognition that misinformation which affects group societal interests can cause individual harm, especially to infants and future generations who do not have a voice (footnote 20). We recommend that the impact of societal harms on current and future generations, such as misinformation about climate change, is given serious consideration within the UK Government’s strategy to combat online harms.
Recommendation 2
Governments and social media platforms should not rely on content removal as a solution to online scientific misinformation.
Society benefits from honest and open discussion on the veracity of scientific claims (footnote 21). These discussions are an important part of the scientific process and should be protected. When they risk causing harm to individuals or wider society, it is right to seek measures which can mitigate that harm. This has often led to calls for online platforms to remove content and ban accounts (footnote 22, footnote 23, footnote 24). However, whilst this approach may be effective and essential for illegal content (eg hate speech, terrorist content, child sexual abuse material), there is little evidence to support its effectiveness for scientific misinformation; approaches which address the amplification of misinformation may prove more effective.
In addition, demonstrating a causal link between online misinformation and offline harm is difficult to achieve (footnote 25, footnote 26), and there is a risk that content removal may cause more harm than good by driving misinformation content (and people who may act upon it) towards harder-to-address corners of the internet (footnote 27).
Deciding what is and is not scientific misinformation is highly resource intensive (footnote 28) and not always immediately possible to achieve as some scientific topics lack consensus (footnote 29) or a trusted authority for platforms to seek advice from (footnote 30). What may be feasible and affordable for established social media platforms may be impractical or prohibitively expensive for emerging platforms which experience similar levels of engagement (eg views, uploads, users) (footnote 31).
Furthermore, removing content may exacerbate feelings of distrust and be exploited by others to promote misinformation content (footnote 32, footnote 33, footnote 34).
Finally, misinformation sometimes comes from domestic political actors, civil society groups, or individual citizens who may, in good faith, believe in the content they are spreading, even if it is harmful to others. Such actors may well regard direct action against their expression as outright censorship (footnote 35, footnote 36).
Allowing content to remain on platforms with mitigations to manage its impact may be a more effective approach to prioritise. Examples of mitigations include demonetising content (eg by disabling ads on misinformation content); focusing on reducing amplification of those messages by preventing viral spread (footnote 37) or regulating the use of algorithmic recommender systems (footnote 38); and annotating content with fact-check labels (see Recommendation 3). These mechanisms would allow for open and informed discussions on scientific topics whilst acknowledging or addressing any controversies associated with the content.
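The idea of preventing viral spread by capping re-shares can be illustrated with a minimal sketch. The mechanism and the limit of five are hypothetical, chosen purely for illustration; they do not describe any particular platform's policy:

```python
class Message:
    """Toy message object carrying a forward counter.

    Each forward increments the counter; once the limit is reached the
    platform refuses further forwarding, slowing viral spread without
    ever inspecting the message content.
    """

    FORWARD_LIMIT = 5  # illustrative value only

    def __init__(self, text: str, forwards: int = 0):
        self.text = text
        self.forwards = forwards

    def forward(self) -> "Message":
        if self.forwards >= self.FORWARD_LIMIT:
            raise PermissionError("forward limit reached")
        # Forwarding produces a copy with an incremented counter
        return Message(self.text, self.forwards + 1)
```

Because the counter travels with the message rather than with its content, this kind of friction applies equally to accurate and misleading material, which is part of why it avoids the definitional problems of content removal.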
As this report highlights, the online information environment has provided major benefits for collective scientific understanding by enabling the free exchange of knowledge amongst industry, academia, and members of the wider population. The Royal Society has long believed that the scientific community has a duty to communicate with the public in order to help people make informed decisions about their lives (footnote 39). Removing content and driving users away from platforms which engage with scientific authorities risks making this harder, not easier, to achieve. A more nuanced, sustainable, and focused approach towards misinformation is needed.
Recommendation 3
To support the UK’s nascent fact-checking sector, programmes which foster independence and financial sustainability are necessary. To help address complex scientific misinformation content and ‘information deserts’, fact checkers could highlight areas of growing scepticism or dispute, for deeper consideration by organisations with strong records in carrying out evidence reviews, such as the UK’s national academies and learned societies.
In response to the challenge of misinformation, a number of major online platforms have partnered with independent fact-checkers, certified by the International Fact-Checking Network (IFCN) (footnote 40), to help them identify and address misleading content (footnote 41). Google and Facebook have themselves invested in independent fact-checking (footnote 42, footnote 43). As such, fact-checkers (and the wider misinformation organisations which also partner with major platforms) have become a vital part of the infrastructure that ensures a healthy online information environment. Although fact-checkers have traditionally been affiliated with established media companies, this association is weakening as independent, dedicated fact-checking organisations are set up (footnote 44). There are now estimated to be 290 fact-checking organisations across the world, with 40% of them based in Europe and North America (footnote 45).
A key challenge facing organisations working in the fact-checking sector is sustainable funding (footnote 46). Many are SMEs or NGOs (footnote 47). According to a 2016 Reuters Institute survey of European fact-checking organisations, more than half reported an annual expenditure of less than $50,000 and just over one quarter reported an annual expenditure of more than $100,000 (footnote 48).
A 2020 survey by the IFCN found that 43% of respondents said their main source of income was Facebook’s Third Party Fact-Checking Program (footnote 49), while a further 42% reported that their income comes from donations, memberships, or grants (footnote 50).
Providing users with tools to safely navigate the online information environment will be essential to combat harmful scientific misinformation. A survey conducted by YouGov for this report suggests there is already an appetite to fact-check information, with the majority of respondents reporting that they would fact-check a suspicious or surprising scientific claim they read online (footnote 51). The important role of fact-checkers is also recognised in the UK Government’s Online Media Literacy Strategy (footnote 52). These organisations generally provide a simple mechanism for users to verify the validity of claims made online and play an important role in informing content-moderation decisions. They provide a public benefit, form a core part of anti-misinformation initiatives, and should be supported. Should the financial viability of organisations in this nascent sector collapse, it could have detrimental effects for the health of the online information environment. As impartiality and financial independence are critical to trust in these organisations, their options for funding are limited (footnote 53). Philanthropic foundations and other grant funders are likely to continue to be necessary in the short to medium term. Platforms, funders and government need to consider sustainable models for long-term funding in this sector.
Furthermore, organisations with expertise in evidence synthesis (such as the UK’s national academies) have a role to play and should be engaging with fact-checking organisations to help provide clarity on complex scientific misinformation content where feasible. This could involve fact-checkers highlighting areas of growing scepticism or dispute as being in need of deeper consideration, in order to address the challenges associated with information deserts where there is no clearly recognised scientific authority.
Recommendation 4
Ofcom must consider interventions for countering misinformation beyond high-risk, high-reach social media platforms.
Under plans set out in the UK Government’s Draft Online Safety Bill, regulations will apply depending on the number of users and/or the type of functionalities which exist on an online platform (footnote 54). Category 1 services, described as ‘high-risk, high-reach services’ will be expected to take action on content deemed to be legal but harmful (footnote 55), which misinformation is likely to fall under (footnote 56). These services will likely include mainstream social media platforms such as Facebook, YouTube, Twitter, and TikTok.
Given the size of these platforms, it is right for them to take appropriate action against harmful misinformation. However, many of these platforms are already taking steps to mitigate the effects of misinformation (footnote 57) and it is not clear that focusing on these high-reach services alone is enough to reduce the effects of harmful misinformation. This focus risks excluding small platforms, with significantly lower reach, from higher scrutiny. Some of these smaller platforms host harmful content banned elsewhere, garnering hundreds of thousands of views (footnote 58).
It is also unclear whether others, such as online retailers, will be expected to take action on harmful but legal content, despite there being examples of scientific misinformation content being promoted on their platforms (footnote 59).
Only a minority of internet users believe in the most prominent examples of scientific misinformation (footnote 60). It may well be the case that this minority consumes harmful misinformation content on fringe online platforms. However, by prioritising mainstream social media platforms, there is a risk that Ofcom will lack the necessary authority and capacity to address misinformation which exists elsewhere in the online information environment. We recommend careful consideration of which platforms interventions should target and advise Ofcom to include fringe online platforms within its focus.
Recommendation 5
Online platforms and scientific authorities should consider designing interventions for countering misinformation on private messaging platforms.
As users shift away from conversations on open, public platforms in favour of closed, private forums (footnote 61, footnote 62), it is likely to become more difficult to analyse the online information environment and design interventions to counter misinformation. This shift will require a re-analysis of society’s collective understanding of how information spreads online as lessons learned from public social media platforms are difficult to translate to private forums (footnote 63).
Designing interventions which preserve end-to-end encryption is essential for ensuring the security and privacy of people’s conversations (footnote 64). It is therefore necessary to design interventions which do not require prior knowledge of a message’s content. Current examples of these include mechanisms to understand how messages spread (footnote 65) or to limit the number of times they can be shared (footnote 66), an option to forward a message to a fact-checker (footnote 67), and the creation of official accounts for scientific authorities (footnote 68). Provenance enhancing technologies also present a potential solution here (see Chapter 3) (footnote 69). These technologies would work by providing users with information about the origins (provenance) of a piece of online content as well as details of any alterations made to it (footnote 70). This could provide a tool to help users verify the validity of any text, images, or videos they receive on a private or public communications channel.
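A provenance-enhancing mechanism of the kind described above could, in principle, work by recording a cryptographic digest of a piece of content alongside its claimed origin and signing the pair. The sketch below is a simplified illustration under that assumption; real provenance systems based on signed manifests are considerably more elaborate, and all names here are hypothetical:

```python
import hashlib
import hmac


def make_provenance_record(content: bytes, origin: str, signing_key: bytes) -> dict:
    # Record the content's SHA-256 digest and its claimed origin, then sign
    # the pair so that later alteration of either can be detected.
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{origin}|{digest}".encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"origin": origin, "digest": digest, "signature": signature}


def verify_provenance(content: bytes, record: dict, signing_key: bytes) -> bool:
    # Re-derive the digest and signature; any edit to the content or to the
    # claimed origin invalidates the record.
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{record['origin']}|{digest}".encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return digest == record["digest"] and hmac.compare_digest(
        expected, record["signature"]
    )
```

Crucially, the check inspects only the digest and signature, never the message text itself, which is what makes this style of intervention compatible with end-to-end encryption.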
Assuming trends towards private messaging continue (footnote 71), misinformation content is likely to become less visible to researchers, regulators, and the platforms themselves. This will therefore become an increasingly important area for those interested in fostering a healthy online information environment. Online platforms and scientific authorities need to consider this behaviour shift in information consumption and design interventions which can promote good quality information and mitigate any harmful effects from misinformation.
Area of action: enabling greater understanding of the online information environment
Recommendation 6
Social media platforms should establish ways to allow independent researchers access to data in a privacy compliant and secure manner.
Understanding the nature of information production and consumption is critical to ensuring society is prepared for future challenges which arise from the online information environment (footnote 72). Analysis of the rich datasets held by social media platforms can help decision-makers understand the extent of harmful online content, how influential it is, and who is producing it. It should also help enable transparent, independent assessments of the effectiveness of counter-misinformation interventions.
The open nature of some platforms (eg Twitter) makes independent research easier to undertake whilst the more restricted nature of other platforms (eg Facebook, YouTube, TikTok) makes this more difficult (footnote 73).
Designing a solution to this and ensuring access to useful data for researchers is highly complex with significant challenges related to privacy, usability, and computing power (footnote 74).
Attempts to do so, such as Social Science One (footnote 75), have faced criticism from funders (footnote 76) and academics (footnote 77) for delays and insufficient access.
Developing a safe and privacy preserving means for independent and impartial analysis, such as a trusted research environment (footnote 78), is an important challenge for Research Councils, Legal Deposit Libraries, and social media platforms to overcome. Social media platforms have ultimate control of this data and should commence, or continue, efforts to find ways to provide access for independent researchers in a secure and privacy compliant manner.
The Royal Society has an ongoing programme of work related to privacy-preserving data analysis and the role of technology in protecting data subjects and is exploring past attempts, existing barriers, and viable solutions to enable privacy-preserving analysis of data (footnote 79).
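One family of techniques relevant to such privacy-preserving analysis is differential privacy, in which calibrated noise is added to aggregate statistics so that no individual's presence in the data can be inferred from the output. The sketch below is a minimal illustration of the idea, not any platform's actual mechanism:

```python
import math
import random


def private_count(records, predicate, epsilon: float = 1.0, rng=None) -> float:
    """Differentially private count of records matching a predicate.

    Laplace noise with scale 1/epsilon is added to the true count, so a
    single record's presence changes the output distribution by at most
    a factor of exp(epsilon).
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) by inverse transform sampling
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

A researcher querying a platform through such an interface would receive useful aggregates (eg how many accounts shared a given article) while the platform never releases row-level data.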
Recommendation 7
Focusing solely on the needs of current online platforms risks a repetition of existing problems, as new, underprepared, platforms emerge and gain popularity. To promote standards and guide start-ups, interested parties need to collaborate to develop examples of best practice for countering misinformation as well as datasets, tools, software libraries, and standardised benchmarks.
It is important to consider the health of the online information environment beyond the currently dominant online platforms. New platforms which grow quickly face a challenge of having to address large amounts of misinformation content without the benefit of years of experience (footnote 80). Focusing solely on the needs of current online platforms risks a repetition of the same problems as new, underprepared, platforms emerge and gain popularity.
A particular challenge is the lack of data new platforms will have access to, in order to train automated detection systems for misinformation content (footnote 81). There are already some encouraging examples of attempts to create datasets (footnote 82) and machine learning models (footnote 83) to assist with this problem. Researchers, policymakers, and platforms must work together to develop further similar initiatives. These should be developed and implemented in a secure, privacy-compliant manner, and published under open licenses, allowing reuse. To ensure high quality data input for machine learning models, the development of data assurance practices should be encouraged (footnote 84).
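To illustrate the kind of system such shared datasets would feed, the sketch below trains a toy bag-of-words naive Bayes classifier. It is deliberately simplistic (real misinformation detection uses far richer models and data) and the training examples are invented:

```python
import math
from collections import Counter


def train(examples):
    # examples: list of (text, label) pairs.
    # Returns per-label word counts and per-label document totals.
    counts = {"misinfo": Counter(), "reliable": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals


def classify(text, counts, totals):
    # Naive Bayes with add-one smoothing over the shared vocabulary.
    vocab = set(counts["misinfo"]) | set(counts["reliable"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A new platform with no labelled data of its own could bootstrap exactly this kind of model from an openly licensed shared corpus, which is the point of the collaboration recommended above.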
Knowledge of how best to ensure a healthy online information environment exists within various fields of expertise, including computational sociology (footnote 85), open-source intelligence (footnote 86), library and information science (footnote 87), and media literacy (footnote 88). As such, calls for collaboration should encompass all interested parties who can usefully contribute to the development of best practice tools and guidance for future online platforms.
Area of action: creating a healthy and trustworthy online information environment
Recommendation 8
Governments and online platforms should implement policies that support healthy and sustainable media plurality.
Many news outlets are a key source of good quality (footnote 89) and trusted (footnote 90) information. The online information environment has provided, and continues to provide, an ecosystem which allows for increased media plurality with few barriers to entry (footnote 91, footnote 92). It is a feature which exposes users to a wide range of viewpoints and prevents a concentration of influence over public opinion (footnote 93). Reporting about science has also benefited from this plurality with new science and technology media outlets gaining significant online followings (footnote 94).
Moves to elevate or prioritise content from ‘trustworthy’ news outlets (footnote 95) in social media feeds present a risk to online media plurality, are likely to favour established, traditional media outlets over new media outlets (footnote 96), and would not necessarily reduce exposure to misinformation content (footnote 97). Although strong arguments have been put forward for online platforms to determine the quality of news content (footnote 98), efforts to compare and rate the trustworthiness of different media outlets (eg with nutrition labels) have proven complex, with some attempts attracting controversy (footnote 99, footnote 100).
Furthermore, unilateral decisions about how algorithms present news content in social media feeds and search engines can negatively impact the reach, traffic, and economic performance of both traditional and new media outlets (footnote 101).
Governments and online platforms need to consider the impact of any future policies on media plurality and take action to ensure a sustainable future for public interest journalism (footnote 102). Robust, diverse, independent news media and education (see Recommendation 9) together can make people more resilient in the face of any potentially harmful misinformation they come across.
Recommendation 9
The UK Government should invest in lifelong, nationwide, information literacy initiatives.
Ensuring that current and future populations can safely navigate the online information environment will require significant investment in digital information literacy, so that people can effectively evaluate online content. In practice, this could include education on how to assess URLs (footnote 103), how to perform a reverse image search (footnote 104), and how to identify a deepfake (footnote 105).
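The URL-assessment skills mentioned above can be partly automated. The sketch below applies a few illustrative heuristics; the trusted-domain list and substitution rules are invented for the example, and none of these checks proves a URL malicious:

```python
from urllib.parse import urlparse

# Illustrative allow-list only; a real tool would use a maintained dataset.
TRUSTED = {"bbc.co.uk", "gov.uk", "who.int"}


def url_warning_signs(url: str) -> list:
    """Return heuristic warning signs for a URL (none is proof on its own)."""
    warnings = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        warnings.append("not served over HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode hostname (possible lookalike domain)")
    # Digit-for-letter substitutions, eg 'g0v.uk' imitating 'gov.uk'
    normalised = host.replace("0", "o").replace("1", "l").replace("3", "e")
    if normalised != host and any(
        normalised == t or normalised.endswith("." + t) for t in TRUSTED
    ):
        warnings.append("domain resembles a trusted site via character substitution")
    return warnings
```

Teaching people the reasoning behind such checks, rather than only supplying the tool, is what makes this a literacy intervention rather than a filtering one.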
This education should not be limited to those in schools, colleges, and universities, but extended to all people of all ages. Older adults face a particular challenge with misinformation as they are more likely to be targeted and more likely to be susceptible than younger adults (footnote 106). These groups could be reached through public information campaigns, in workplaces, or on social media platforms. Current initiatives such as the UK Government’s ‘Don’t Feed the Beast’ campaign (footnote 107) and the Check Before You Share toolkit (footnote 108) should be assessed for their effectiveness and improved where necessary.
As the nature of the online information environment is likely to continue evolving over time with new platforms, technologies, actors, and techniques, it is important to consider information literacy as a life skill, supplemented with lifelong learning. These initiatives should be carefully tailored and designed to support people from a broad range of demographics.
There have been widespread calls (footnote 109, footnote 110, footnote 111, footnote 112) for digital information literacy to form a core part of future strategies to ensure people can safely navigate the online information environment. Successful implementation of the UK Government’s Online Media Literacy Strategy is an important next step (footnote 113).
Area of action: enabling access to scientific information
Recommendation 10
Academic journals and institutions should continue to work together to enable open access publishing of academic research.
The ability to easily share and find high quality information is one of the greatest benefits of the online information environment and likely explains why the majority of respondents to the Society’s survey believe the internet has improved the public’s understanding of science (footnote 114). In particular, the internet’s role in opening access to academic research, which would otherwise be locked within physical journals, can often be transformative for society’s collective understanding of the world.
Ensuring ease of access to academic research online helps promote more accurate verification of results, reduces duplication of work, and improves public trust in science (footnote 115). As a strong supporter of open science (footnote 116), the Royal Society is currently working towards transitioning its own primary research journals to open access, which will help maximise the dissemination and impact of high-quality scientific research (footnote 117).
The COVID-19 pandemic has reinforced the case for open access publishing (footnote 118, footnote 119) and demonstrated its benefits (footnote 120). These benefits can and should be realised for a broad range of societal problems beyond the pandemic. Moves towards open access publishing (footnote 121) are to be welcomed, and academic journals and institutions should work together to enable further open access publishing of academic research.
Novel aspects of open access research, such as the growing popularity of preprints (footnote 122) or the use of citations as an indicator of quality (footnote 123), have been subject to debate in recent years. We note these concerns and encourage institutions to consider lessons learned for the next generation of academic publishing.
Recommendation 11
The frameworks governing electronic legal deposit should be reviewed and reformed to allow better access to archived digital content.
In 2013, the UK Government introduced new regulations that required digital publications to be systematically preserved as part of something known as legal deposit. Legal deposit has existed in English law since 1662 and obliges publishers to place at least one copy of everything they publish in the UK and Ireland – from books to music and maps – at a designated library.
Since it was extended to include digital media, the six designated legal deposit libraries in the UK have accumulated around 700 terabytes of archived web data as part of the UK Web Archive, growing by around 70TB every year. The libraries automatically collect – or crawl – UK websites at least once a year to gather a snapshot of what they contain, while some important websites such as news sites are collected daily. They also collect ebooks, electronic journals, videos, pdfs and social media posts – almost everything that is available in a digital format.
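The crawl-and-snapshot process described above can be illustrated with a toy content-addressed store that keeps a new copy of a page only when its content has changed between visits. This is a hypothetical sketch, not the UK Web Archive's actual architecture:

```python
import datetime
import hashlib


class SnapshotStore:
    """Toy content-addressed web archive.

    A crawl records a page only when its content differs from the previous
    visit, so frequent crawls of a stable page cost almost no extra storage.
    """

    def __init__(self):
        self.history = {}  # url -> list of (timestamp, digest)
        self.blobs = {}    # digest -> raw page content

    def record(self, url: str, content: bytes, when: str = None) -> str:
        digest = hashlib.sha256(content).hexdigest()
        if when is None:
            when = datetime.datetime.now(datetime.timezone.utc).isoformat()
        versions = self.history.setdefault(url, [])
        if not versions or versions[-1][1] != digest:
            self.blobs[digest] = content
            versions.append((when, digest))
        return digest
```

Deduplicating by digest in this way is one reason an archive can crawl important sites daily while growing by only tens of terabytes a year.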
Access to this material is extremely limited. Under the current legislative framework, historic pages for only around 19,000 websites can be accessed through the Web Archive’s online portal. These are sites whose creators have given explicit permission for open access to their content; however, contacting every UK website in this way is almost impossible. For the rest, even though access is permitted and the material is held digitally, researchers must travel in person to one of nine named sites. The framework also permits only one researcher to use a piece of material at any one time; an arbitrary limitation when it comes to digital access.
This framework for access is now out of step with how people access and use data, and severely limits the value that trustworthy libraries and archives are able to offer (footnote 124). Opening up the Web Archive would allow it to be mined at scale for high quality information using modern text analysis methods or artificial intelligence. It would enable researchers, businesses, journalists and anyone else with an interest to uncover trends or information hidden in web pages from the past. This will become increasingly important as the online information environment matures and vital source material is digitally archived (see Chapter 4).
The frameworks governing electronic legal deposit need to be reviewed and reformed to allow wider access. Such a review would need to consider the data held in these legal deposits that remains commercially valuable, such as newspaper archives. Rather than act as a barrier to access, systems such as micropayments, like those already paid to authors of books borrowed from libraries, could be applied to such material in order to support broader access.
Footnotes
-
12. HM Government. 2020 Online Harms White Paper. See: https://www.gov.uk/government/consultations/online-harms-white-paper (accessed 4 November 2021).
Back to report -
13. 5% do not believe human activity is responsible for climate change. Royal Society / YouGov, July 2021.
Back to report -
14. Coan T, Boussalis C, Cook J, Nanko M. 2021 Computer-assisted detections and classification of misinformation about climate change. SocArXiv (doi:10.31235/osf.io/crxfm).
Back to report -
15. Avaaz. 2021 Facebook’s Climate of Deception: How Viral Misinformation Fuels the Climate Emergency. See https://secure.avaaz.org/campaign/en/facebook_climate_misinformation/ (accessed 4 November 2021).
Back to report -
16. 83% consider misinformation about climate change to be harmful, 67% consider misinformation about 5G technology to be harmful. Royal Society / YouGov, July 2021.
Back to report -
17. HM Government. 2021 Minister launches new strategy to fight online disinformation. See https://www.gov.uk/ government/news/minister-launches-new-strategy-to-fight-online-disinformation (accessed 4 November 2021).
Back to report -
18. HM Government. 2020 Online Harms White Paper. See: https://www.gov.uk/government/consultations/online-harms-white-paper (accessed 4 November 2021).
Back to report -
19. Hansard. Debate on Online Harms. See https://hansard.parliament.uk/commons/2020-11-19/debates/29AA4774-FDE3-4AB9-BBAB-F072DE3E8074/OnlineHarms (accessed 4 November 2021).
Back to report -
20. Robinson K. 2020 The Ministry for the Future. London, UK: Orbit Books.
Back to report -
21. Smith L, Stern N. 2011 Uncertainty in science and its role in climate policy. Phil. Trans. R. Soc. A. 369: 4818-4841. (doi.org/10.1098/rsta.2011.0149).
Back to report -
22. UK Labour Party. Labour calls for emergency legislation to “stamp out dangerous anti vax content”. See https://labour.org. uk/press/labour-calls-for-emergency-legislation-to-stamp-out-dangerous-anti-vax-content/ (accessed 4 November 2021).
Back to report -
23. Covid vaccine: Social media urged to remove ‘disinfo dozen’. BBC News. 26 March 2021. See https://www.bbc.co.uk/ news/technology-56536390 (accessed 4 November 2021).
Back to report -
24. Priti Patel urges social media to remove antivax posts. The Times. 11 February 2021. See https://www.thetimes.co.uk/ article/priti-patel-tells-social-media-to-remove-antivax-posts-77ggm5tjn (accessed 4 November 2021).
Back to report -
25. Miró-Llinares F, Aguerri J. 2021 Misinformation about fake news: A systematic critical review of empirical studies on the phenomenon and its status as a ‘threat’. European Journal of Criminology. (doi.org/10.1177/1477370821994059).
Back to report -
26. Greene C, Murphy G. 2021 Quantifying the effects of fake news on behaviour: Evidence from a study on COVID-19 misinformation. Journal of Experimental Psychology. Applied. (doi.org/10.1037/xap0000371).
Back to report -
27. Royal Society roundtable with Major Technology Organisations, March 2021.
Back to report -
28. The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People. Motherboard. 23 August 2018. See https://www.vice.com/en/article/xwk9zd/how-facebook-content-moderation-works (accessed 4 November 2021).
Back to report -
29. Facebook lifts ban on posts claiming Covid-19 was man-made. The Guardian. 27 May 2021. See https://www.theguardian.com/technology/2021/may/27/facebook-lifts-ban-on-posts-claiming-covid-19-was-man-made (accessed 4 November 2021).
Back to report -
30. Royal Society roundtable with Major Technology Organisations, March 2021.
Back to report -
31. Ibid.
Back to report -
32. BrandNewTube. Ask The Experts (Covid 19 Vaccine) – Now Banned on YouTube and Facebook. See https://brandnewtube.com/watch/ask-the-experts-covid-19-vaccine-now-banned-on-youtube-and-facebook_ qIsNohSIeSgfz2J.html (accessed 4 November 2021).
Back to report -
33. Banned.Video – the most banned videos on the internet. See https://www.banned.video/ (accessed 4 November 2021).
Back to report -
34. Jansen S, Martin B. 2015 The Streisand Effect and Censorship Backfire. International Journal of Communication 9, 16.
Back to report -
35. Trump says he will sue social media giants over ‘censorship’. The Guardian. 7 July 2021. See https://www.theguardian. com/us-news/2021/jul/07/donald-trump-facebook-twitter-google-lawsuit (accessed 4 November 2021).
Back to report -
36. Censorship concerns as talkRadio removed from YouTube. Society of Editors. 5 January 2021. See https://www.societyofeditors.org/soe_news/censorship-concerns-as-talkradio-removed-from-youtube/.
Back to report -
37. See Chapter 3 – tools and approaches for countering misinformation.
Back to report -
38. Cobbe J, Singh J. 2019 Regulating recommending: Motivations, considerations, and principles. European Journal of Law and Technology 10, 3.
Back to report -
39. The Royal Society. 1985 The Public Understanding of Science. See https://royalsociety.org/topics-policy/ publications/1985/public-understanding-science/ (accessed 4 November 2021).
Back to report -
40. IFCN-certified fact-checkers sign up to a Code of Principles (eg a commitment to transparency).
41. Royal Society roundtable with Major Technology Organisations, March 2021.
42. COVID-19: $6.5million to help fight coronavirus misinformation. Google News Initiative. 2 April 2020. See https://www.blog.google/outreach-initiatives/google-news-initiative/covid-19-65-million-help-fight-coronavirus-misinformation/ (accessed 4 November 2021).
43. Facebook’s investments in fact-checking and media literacy. Facebook Journalism Project. 15 June 2021. See https://www.facebook.com/journalismproject/programs/third-party-fact-checking/industry-investments (accessed 4 November 2021).
44. The Fact-Check Industry. Columbia Journalism Review. 2019. See https://www.cjr.org/special_report/fact-check-industry-twitter.php (accessed 4 November 2021).
45. Annual census finds nearly 300 fact-checking projects around the world. Duke Reporters’ Lab. 22 June 2020. See https://reporterslab.org/annual-census-finds-nearly-300-fact-checking-projects-around-the-world/ (accessed 4 November 2021).
46. Royal Society roundtable with Safety Technology Organisations, March 2021.
47. Ibid.
48. Reuters Institute for the Study of Journalism. 2016 The Rise of Fact-Checking Sites in Europe. See https://reutersinstitute.politics.ox.ac.uk/our-research/rise-fact-checking-sites-europe (accessed 4 November 2021).
49. International Fact-Checking Network. 2020 State of Fact Checking 2020. See https://www.poynter.org/wp-content/uploads/2020/06/IFCN_2020_state-of-fact-checking_ok.pdf (accessed 4 November 2021). This survey covered 80 organisations that are either current verified signatories of the IFCN Code of Principles or undergoing the renewal process.
50. Ibid.
51. 68% said they would be likely to fact-check a suspicious scientific claim they saw online. Royal Society / YouGov, July 2021.
52. HM Government. 2021 Online Media Literacy Strategy. See https://www.gov.uk/government/publications/online-media-literacy-strategy (accessed 4 November 2021).
53. Royal Society roundtable with Safety Technology Organisations, March 2021.
54. HM Government. 2021 Draft Online Safety Bill. See https://www.gov.uk/government/publications/draft-online-safety-bill (accessed 4 November 2021).
55. HM Government. 2020 Online Harms White Paper: Full government response to the consultation. See https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response (accessed 4 November 2021).
56. UK Parliament. 2020 Misinformation in the COVID-19 Infodemic: Government Response to the Committee’s Second Report. See https://publications.parliament.uk/pa/cm5801/cmselect/cmcumeds/894/89402.htm (accessed 4 November 2021).
57. See Chapter 2: ‘Policies adopted by major online platforms’.
58. The controversial COVID-19 ‘Ask the Experts’ video, which discourages the use of vaccines, is available on BrandNewTube and has 376,000 views. BrandNewTube, accessed September 2021.
59. COVID-19: Waterstones and Amazon urged to add warning tags as anti-vaccination book sales surge. Sky News. 5 March 2021. See https://news.sky.com/story/waterstones-and-amazon-urged-to-add-warning-tags-as-anti-vaccination-book-sales-surge-12234972 (accessed 4 November 2021).
60. 4-5% do not believe the COVID-19 vaccines are safe and 5% do not believe human activity is responsible for climate change. 5% believe 5G technology is very harmful to human health, and a further 10% believe it is fairly harmful to human health. Royal Society / YouGov, July 2021.
61. Royal Society roundtable with Major Technology Organisations, March 2021.
62. Reuters Institute for the Study of Journalism. 2018 Digital News Report. See https://www.digitalnewsreport.org/survey/2018/ (accessed 4 November 2021).
63. Funke D. 2017 Here’s why fighting fake news is harder on WhatsApp than on Facebook. Poynter. See https://www.poynter.org/fact-checking/2017/here%C2%92s-why-fighting-fake-news-is-harder-on-whatsapp-than-on-facebook/ (accessed 4 November 2021).
64. The Royal Society. 2016 Progress and research in cybersecurity: Supporting a resilient and trustworthy system for the UK. See https://royalsociety.org/-/media/policy/projects/cybersecurity-research/cybersecurity-research-report.pdf (accessed 4 November 2021).
65. Bronstein M, Bruna J, LeCun Y, Szlam A, Vandergheynst P. 2017 Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine. 34, 18-42. (https://doi.org/10.1109/MSP.2017.2693418).
66. WhatsApp. About forwarding limits. See https://faq.whatsapp.com/general/chats/about-forwarding-limits/?lang=en (accessed 4 November 2021).
67. How Line is fighting disinformation without sacrificing privacy. Rest of World. 7 March 2021. See https://restofworld.org/2021/how-line-is-fighting-disinformation-without-sacrificing-privacy/ (accessed 4 November 2021).
68. WHO Health Alert brings COVID-19 facts to billions via WhatsApp. World Health Organization. 20 March 2021. See https://www.who.int/news-room/feature-stories/detail/who-health-alert-brings-covid-19-facts-to-billions-via-whatsapp (accessed 4 November 2021).
69. McAuley D, Koene A, Chen J. 2020 Response to the Royal Society Call for Evidence: Technologies for Spreading and Detecting Misinformation. (https://doi.org/10.17639/wvk8-0v11).
70. Content Authenticity Initiative. How it works. See https://contentauthenticity.org/how-it-works (accessed 4 November 2021).
71. Reuters Institute for the Study of Journalism. 2018 Digital News Report. See https://www.digitalnewsreport.org/survey/2018/ (accessed 4 November 2021).
72. Omand D, Bartlett J, Miller C. 2012 Introducing social media intelligence (SOCMINT). Intelligence and National Security. 27, 801-823. (https://doi.org/10.1080/02684527.2012.716965).
73. Arguedas A, Robertson C, Fletcher R, Nielsen R. 2021 Echo chambers, filter bubbles, and polarisation. The Royal Society. See https://royalsociety.org/topics-policy/projects/online-information-environment
74. The Alan Turing Institute. Data safe havens in the cloud. See https://www.turing.ac.uk/research/research-projects/data-safe-havens-cloud (accessed 4 November 2021).
75. Harvard University. Social Science One: Building Industry Academic Partnerships. See https://socialscience.one/ (accessed 4 November 2021).
76. Statement from Social Science Research Council President Alondra Nelson on the Social Media and Democracy Research Grants Program. Social Science Research Council. 27 August 2019. See https://www.ssrc.org/programs/social-data-initiative/social-media-and-democracy-research-grants/statement-from-social-science-research-council-president-alondra-nelson-on-the-social-media-and-democracy-research-grants-program/ (accessed 4 November 2021).
77. Facebook Said It Would Give Detailed Data To Academics. They’re Still Waiting. BuzzFeed News. 22 August 2019. See https://www.buzzfeednews.com/article/craigsilverman/slow-facebook (accessed 4 November 2021).
78. Health Data Research UK. Trusted Research Environments. See https://www.hdruk.ac.uk/access-to-health-data/trusted-research-environments/ (accessed 4 November 2021).
79. The Royal Society. Privacy Enhancing Technologies. See https://royalsociety.org/topics-policy/projects/privacy-enhancing-technologies/ (accessed 4 November 2021).
80. Royal Society roundtable with Major Technology Organisations, March 2021.
81. Ibid.
82. SFU Discourse Lab. MisInfoText. See https://github.com/sfu-discourse-lab/MisInfoText (accessed 4 November 2021).
83. Khan J, Khondaker M, Afroz S, Uddin G, Iqbal A. 2021 A benchmark study of machine learning models for online fake news detection. Machine Learning with Applications. 4. (https://doi.org/10.1016/j.mlwa.2021.100032).
84. Assurance, trust, confidence – what does it all mean for data? Open Data Institute. 18 June 2021. See https://theodi.org/article/assurance-trust-confidence-what-does-it-all-mean-for-data/ (accessed 4 November 2021).
85. Ciampaglia G. 2017 Fighting fake news: A role for computational social science in the fight against digital misinformation. Journal of Computational Social Science. 1, 147-153. (https://doi.org/10.1007/s42001-017-0005-6).
86. Bellingcat. Bellingcat’s Online Investigation Toolkit. See https://docs.google.com/spreadsheets/d/18rtqh8EG2q1xBo2cLNyhIDuK9jrPGwYr9DI2UncoqJQ/edit#gid=930747607 (accessed 4 November 2021).
87. Revez J, Corujo L. 2021 Librarians against fake news: A systematic literature review of library practices (Jan 2018 – September 2020). The Journal of Academic Librarianship. 47. (https://doi.org/10.1016/j.acalib.2020.102304).
88. Guess et al. 2020 A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences. 117, 15536-15545. (https://doi.org/10.1073/pnas.1920498117).
89. Ofcom. 2020 News Consumption in the UK. See https://www.ofcom.org.uk/__data/assets/pdf_file/0013/201316/news-consumption-2020-report.pdf (accessed 4 November 2021).
90. Reuters Institute for the Study of Journalism. 2021 Digital News Report. See https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2021 (accessed 4 November 2021).
91. Reuters Institute for the Study of Journalism. 2012 News Plurality in a Digital World. See https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-11/News%20Plurality%20in%20a%20Digital%20World_0.pdf (accessed 4 November 2021).
92. Open Society Foundations. 2014 Digital journalism: Making news, breaking news. See https://www.opensocietyfoundations.org/uploads/02fc2de9-f4a5-4c07-8131-4fe033398336/mapping-digital-media-overviews-20140828.pdf (accessed 4 November 2021).
93. Ofcom. 2021 The Future of Media Plurality in the UK. See https://www.ofcom.org.uk/__data/assets/pdf_file/0012/220710/media-plurality-in-the-uk-condoc.pdf (accessed 4 November 2021).
94. Examples: IFLScience, Rest of World, UNILAD Tech.
95. Mosseri A. 2018 Helping ensure news on Facebook is from trusted sources. Facebook. 19 January 2018. See https://about.fb.com/news/2018/01/trusted-sources/ (accessed 4 November 2021).
96. Facebook is changing news feed (again) to stop fake news. Wired. 4 October 2019. See https://www.wired.com/story/facebook-click-gap-news-feed-changes/ (accessed 4 November 2021).
97. Tsfati Y, Boomgaarden H, Strömbäck J, Vliegenthart R, Damstra A, Lindgren E. 2019 Causes and consequences of mainstream media dissemination of fake news: literature review and synthesis. Annals of the International Communication Association. 44, 157-173. (https://doi.org/10.1080/23808985.2020.1759443).
98. The Cairncross Review. 2019 A sustainable future for journalism. See https://www.gov.uk/government/publications/the-cairncross-review-a-sustainable-future-for-journalism (accessed 4 November 2021).
99. ‘We were wrong’: US news rating tool boosts Mail Online trust ranking after talks with unnamed Daily Mail exec. PressGazette. 31 January 2019. See https://www.pressgazette.co.uk/we-were-wrong-us-news-rating-tool-boosts-mail-online-trust-ranking-after-talks-with-unnamed-daily-mail-exec/ (accessed 4 November 2021).
100. Wikipedia bans Daily Mail as ‘unreliable’ source. The Guardian. 8 February 2017. See https://www.theguardian.com/technology/2017/feb/08/wikipedia-bans-daily-mail-as-unreliable-source-for-website (accessed 4 November 2021).
101. Bailo F, Meese J, Hurcombe E. 2021 The Institutional Impacts of Algorithmic Distribution: Facebook and the Australian News Media. Social Media + Society. 7. (https://doi.org/10.1177%2F20563051211024963).
102. The Cairncross Review. 2019 A sustainable future for journalism. See https://www.gov.uk/government/publications/the-cairncross-review-a-sustainable-future-for-journalism (accessed 4 November 2021).
103. Polizzi G. 2020 Fake news, Covid-19 and digital literacy: Do what experts do. London School of Economics. 17 June 2020. See https://blogs.lse.ac.uk/medialse/2020/06/17/fake-news-covid-19-and-digital-literacy-do-what-the-experts-do/ (accessed 4 November 2021).
104. Ibid.
105. Microsoft. Spot the Deepfake. See https://www.spotdeepfakes.org/en-US (accessed 4 November 2021).
106. Moore R, Hancock J. 2020 Older Adults, Social Technologies, and the Coronavirus Pandemic: Challenges, Strengths, and Strategies for Support. Social Media + Society. (https://doi.org/10.1177%2F2056305120948162).
107. HM Government. 2020 Government cracks down on spread of false coronavirus information online. See https://www.gov.uk/government/news/government-cracks-down-on-spread-of-false-coronavirus-information-online (accessed 4 November 2021).
108. HM Government. Check Before You Share Toolkit. See https://dcmsblog.uk/check-before-you-share-toolkit/ (accessed 4 November 2021).
109. European Commission. 2018 Final report of the High Level Expert Group on Fake News and Online Disinformation. See https://www.ecsite.eu/activities-and-services/resources/final-report-high-level-expert-group-fake-news-and-online (accessed 4 November 2021).
110. House of Commons Digital, Culture, Media and Sport Committee. 2019 Disinformation and ‘fake news’: Final Report. See https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf (accessed 4 November 2021).
111. House of Lords Select Committee on Democracy and Digital Technologies. 2020 Digital Technology and the Resurrection of Trust. See https://committees.parliament.uk/publications/1634/documents/17731/default/ (accessed 4 November 2021).
112. The Alan Turing Institute. 2021 Understanding vulnerability to online misinformation. See https://www.turing.ac.uk/sites/default/files/2021-02/misinformation_report_final1_0.pdf (accessed 4 November 2021).
113. HM Government. 2021 Online Media Literacy Strategy. See https://www.gov.uk/government/publications/online-media-literacy-strategy (accessed 4 November 2021).
114. 61% believe the internet has made the public’s understanding of science better. Royal Society / YouGov, July 2021.
115. OECD. 2015 Making Open Science a Reality. See https://www.oecd-ilibrary.org/science-and-technology/making-open-science-a-reality_5jrs2f963zs1-en (accessed 4 November 2021).
116. This includes open publishing and open data, see https://royalsociety.org/topics-policy/projects/science-public-enterprise/report/ (accessed 10 November 2021).
117. The Royal Society. 2021 The Royal Society sets 75% threshold to ‘flip’ its research journals to Open Access over the next five years. See https://royalsociety.org/news/2021/05/royal-society-open-access-plans/ (accessed 4 November 2021).
118. UNESCO. Open access to facilitate research and information on COVID-19. See https://en.unesco.org/covid19/communicationinformationresponse/opensolutions (accessed 4 November 2021).
119. Kiley R. 2020 Three lessons COVID-19 has taught us about Open Access publishing. London School of Economics. 6 October 2020. See https://blogs.lse.ac.uk/impactofsocialsciences/2020/10/06/39677/ (accessed 4 November 2021).
120. European Molecular Biology Laboratory. 2020 Open data sharing accelerates COVID-19 research. See https://www.ebi.ac.uk/about/news/announcements/open-data-sharing-accelerates-covid-19-research (accessed 4 November 2021).
121. UK Research and Innovation. 2021 UKRI announces new Open Access Policy. See https://www.ukri.org/news/ukri-announces-new-open-access-policy/ (accessed 4 November 2021).
122. Soderberg C, Errington T, Nosek B. 2020 Credibility of preprints: an interdisciplinary survey of researchers. Royal Society Open Science. 7, 201520. (https://doi.org/10.1098/rsos.201520).
123. Aksnes D, Langfeldt L, Wouters P. 2019 Citations, citation indicators, and research quality: An overview of basic concepts and theories. SAGE Open. (https://doi.org/10.1177%2F2158244019829575).
124. Gooding P, Terras M, Berube L. 2019 Towards user-centric evaluation of UK non-print legal deposit: A digital library futures White Paper. Digital Library Futures. See http://eprints.gla.ac.uk/186755/ (accessed 4 November 2021).