The online information environment

A report by the Royal Society on the impact of the internet on our information environment, and on misinformation relating to scientific issues

How are digital technologies changing the way people interact with information? Which technologies can fabricate misinformation, and which can detect it? And what role does technology have to play in creating a better information environment?

The online information environment (PDF) report addresses these questions, providing an overview of how the internet has changed, and continues to change, the way society engages with scientific information, and how it may be affecting people’s decision-making behaviour – from taking up vaccines to responding to evidence on climate change. It highlights key challenges for creating a healthy online information environment and makes a series of recommendations for policymakers, academics, and online platforms.

How are digital technologies shaping the information people encounter?

Patterns of information consumption are changing: individuals increasingly look to the online environment for news, and search engines and social media platforms play an increasingly important role in shaping access to information and participation in public debates. New technologies and uses of data are shaping this online information environment, whether through micro-targeting, filter bubbles, or sophisticated synthetic text, videos and images. 

These technologies have great potential and are already being deployed in a range of contexts, from entertainment through to education. At the same time, there are growing concerns about the new forms of online harm and the erosion of trust that they could enable.

While misinformation is not a new problem, and uncertainty and debate are intrinsic parts of science, the internet has drastically magnified the speed and scale at which poor-quality information can spread.

The report highlights how online misinformation on scientific issues, like climate change or vaccine safety, can harm individuals and society. It stresses that censoring or removing inaccurate, misleading and false content, whether shared unwittingly or deliberately, is not a silver bullet and may undermine the scientific process and public trust. Instead, the focus should be on building resilience against harmful misinformation across the population and on promoting a “healthy” online information environment.

The edge of error

Professor Frank Kelly FRS, Professor of the Mathematics of Systems at the Statistical Laboratory, University of Cambridge, and Chair of the report, said: “Science stands on the edge of error and the nature of the scientific endeavour at the frontiers means there is always uncertainty.

“In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself, but that prodding and testing of received wisdom is integral to the advancement of science, and society.

“This is important to bear in mind when we are looking to limit scientific misinformation’s harms to society. Clamping down on claims outside the consensus may seem desirable, but it can hamper the scientific process and force genuinely malicious content underground.”

Perspectives

Alongside the publication of this report, the Society is launching a blog series of weekly perspective pieces, in which leading figures offer personal takes on specific aspects of this topic: from potential regulatory approaches, to what the media is doing to combat fake news, to the role of knowledge institutions.

Common questions

What is scientific misinformation?

Scientific misinformation is defined as information which is presented as factually true but directly counters, or is refuted by, established scientific consensus. This includes related concepts such as ‘disinformation’, which refers to the deliberate sharing of misinformation content.

Why do people share misinformation?

The actors involved in producing and disseminating misinformation content can be broadly categorised as intentional or unintentional, and further differentiated by motivation. These actors exist across all sections of society and often include those in positions of power and influence (e.g. political leaders, public figures, and media outlets). The report identifies four types of misinformation actors:

  • Good Samaritans: These users unknowingly produce and share misinformation content. Their motivation is to help others by sharing useful information which they believe to be true. Examples of this could include unknowingly sharing an ineffective health treatment or an inaccurate election schedule.
  • Profiteers: These users either knowingly share misinformation content or are ambivalent about its veracity. Consumption of their content generates profit for them, with greater engagement resulting in higher profit. Examples include writers for explicitly false news outlets who are paid directly via a Google Ads account, companies selling fraudulent health treatments, and video content creators profiting from advertising revenue. Profit, in this context, is not restricted to monetary value and can include other forms of personal gain (e.g. more votes or greater reach).
  • Coordinated influence operators: These users knowingly produce and share misinformation content. Their motivation is to sway public opinion in a manner that will benefit the agenda of their organisation, industry, or government. The aim is to either convince consumers of an alternate story or to undermine faith in trusted institutions. Examples include successfully publishing political opinion pieces by a fabricated expert in reputable online news outlets and using automated social media accounts (bots) to promote climate change denialism.
  • Attention hackers: These users knowingly produce and share misinformation content. Their motivation is personal joy. Sometimes referred to as ‘trolling’, these users devise outlandish or divisive content and take steps to maximise the attention it receives. Examples include sending messages to mainstream talk shows in the hope that the content will be read out on air, fooling high-profile figures into resharing content on their social media accounts, and sharing conspiracy theories on unsuspecting television and radio phone-ins (known as ‘groyping’).

What is malinformation?

Genuine, unedited content can be shared without context to create a misleading narrative. This is made easier in the online information environment, where content can be disseminated between people without intermediaries (e.g. news outlets, government officials). Such content has been referred to as ‘malinformation’. Examples include sharing real images while claiming that they show something they do not, or sharing images of a different event or date to create a false narrative and discredit targets.

What is a deepfake?

The term originates from a Reddit user who shared edited videos in which celebrity faces were swapped into pornographic footage. Deepfakes are novel audio and/or visual content generated using artificial intelligence techniques such as generative adversarial networks (GANs). A GAN pits two neural networks against each other: one creates false content while the other tries to detect it. GANs can be trained using images, sounds, and videos of the target, and the result is convincingly edited ‘new’ audio and/or visual content.
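
The adversarial setup described above can be sketched in a few lines of PyTorch. This is a toy illustration of a GAN training loop, not a deepfake system: the “real” images are stand-in random tensors and both networks are deliberately tiny.

```python
# Minimal GAN training loop (illustrative sketch only).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # e.g. flattened 28x28 greyscale images

# Generator: maps random noise to a synthetic "image".
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())

# Discriminator: estimates the probability that an input is real.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, image_dim)  # stand-in for a batch of real images

    # 1. Train the discriminator to separate real from generated content.
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = (loss(D(real), torch.ones(32, 1)) +
              loss(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fake = G(torch.randn(32, latent_dim))
    g_loss = loss(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks improve together, the generator’s output becomes progressively harder to distinguish from genuine content, which is what makes GAN-generated media so convincing.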

Deepfakes can involve portraying individuals doing or saying things which they never did or said. They can also involve the generation of a new ‘person’: a still image of a novel face to be used in creating a fabricated online persona. Research has found that the majority of the thousands of deepfakes currently in existence are pornographic in nature; however, other examples have included deepfakes of politicians, campaigners, celebrities, and the Queen.

What are shallowfakes?

A form of malinformation, shallowfakes are videos which have been presented out of context or crudely edited. These effects are achieved with ordinary video-editing software or smartphone applications, for example by changing the speed of video segments or splicing clips together so as to omit relevant context.
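
To illustrate how low the technical bar is, the sketch below uses Python to invoke ffmpeg (a widely available free video tool) to slow a clip to half speed; this kind of crude edit, rather than any AI technique, is what produces a shallowfake. The file names are placeholders.

```python
# Illustration of a crude "shallowfake" speed edit. No AI is involved;
# this is a standard ffmpeg filter invoked from Python.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",    # placeholder source clip
    "-filter:v", "setpts=2.0*PTS",  # double the timestamps = half speed
    "-an",                          # drop audio to hide the tell-tale pitch shift
    "output.mp4",
], check=True)
```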

What is provenance enhancing technology?

Focusing on the origins of content rather than its veracity, organisations developing provenance-enhancing technologies aim to equip information consumers with the means to decide whether a piece of content is genuine and has not been manipulated. This is achieved by analysing the content’s metadata (e.g. sender, recipient, timestamp, location) to determine who created it, and how and when it was created.

This is the primary aim of the Coalition for Content Provenance and Authenticity (an initiative led by Adobe, ARM, the BBC, Intel, Microsoft, TruePic, and Twitter), which is developing a set of technical specifications on content provenance. If widely adopted, these specifications would enable platforms to better address or label problematic content, and information consumers to judge the veracity of a claim, image, or video.
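
The idea can be made concrete with a small sketch: a provenance manifest records who, how, and when, is bound to the content by a cryptographic hash, and is signed so that any tampering is detectable. This is not the C2PA format; real schemes use certificate-based signatures, whereas this example uses a simple HMAC with a demo key purely to stay self-contained.

```python
# Illustrative provenance check: hash-bound, signed metadata manifest.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # hypothetical; real signers use certificates

def sign_manifest(content: bytes, creator: str, tool: str, created_at: str) -> dict:
    manifest = {
        "creator": creator,        # who created it
        "tool": tool,              # how it was created
        "created_at": created_at,  # when it was created
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash  # fails if content or metadata was altered

image = b"...raw image bytes..."
m = sign_manifest(image, "BBC News", "Camera XYZ", "2022-01-19T10:00:00Z")
assert verify(image, m)
assert not verify(image + b"tampered", m)
```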

What are bots?

Bots are pre-programmed online accounts that engage with and respond to online content in an automated fashion. They take many forms in the online information environment:

  • Chatbots can act as customer service operators for large companies (e.g. retail banks) or as false personas on social media platforms.
  • Voice assistants recognise and respond verbally to spoken requests made by a user to a smart device.
  • Bot crawlers undertake basic administrative tasks for their owners, such as indexing web pages or countering minor acts of vandalism on wikis.
  • Traffic bots exist to inflate the number of views for a piece of online content and so boost revenue derived from online advertising.

Positive applications of bots include their use to counter misinformation, to support people with disabilities, and to disseminate news updates. Negative applications include the use of bots to deceptively influence public opinion, to suppress news stories, and to abuse people.
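
As a deliberately simplified illustration of how such automation works, the sketch below shows a bot as a loop that polls a platform’s API and posts replies with no human in the loop. The endpoints and response schema here are hypothetical; real platforms each have their own APIs and rules governing automated accounts. This example implements the “positive” use named above: replying to a known false claim with a link to an authoritative source.

```python
# Minimal bot sketch: poll a (hypothetical) social platform API and
# reply automatically to posts containing a known false claim.
import time
import requests

API = "https://social.example.com/api"  # hypothetical endpoint
TOKEN = {"Authorization": "Bearer <token>"}

CORRECTIONS = {
    "5g causes covid": "https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters",
}

while True:
    # Poll recent public posts (hypothetical endpoint and schema).
    posts = requests.get(f"{API}/posts/recent", headers=TOKEN).json()
    for post in posts:
        for claim, source in CORRECTIONS.items():
            if claim in post["text"].lower():
                # Reply automatically with an authoritative source.
                requests.post(f"{API}/posts/{post['id']}/reply",
                              headers=TOKEN,
                              json={"text": f"This claim is disputed; see {source}"})
    time.sleep(60)  # fixed automated cadence; no human in the loop
```

The same loop structure, pointed at different endpoints and payloads, is what allows bots to be used at scale for either beneficial or harmful ends.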

Method

Alongside the report, the Society has published a series of independently commissioned literature reviews and surveys which have helped inform the thinking of the expert working group on these issues. These include: