How do you enforce regulations across borders when the people who are creating misinformation are in countries that don’t have the same laws or values?

Iterations of a Generative Adversarial Network (GAN) learning to create abstract art

In 2019, I began examining the implications of an emerging technological phenomenon that some academics termed "synthetic reality". With the help of artificial intelligence, people were producing artificial images and videos so seamlessly crafted that they looked real. These became more widely known as deepfakes.

Although it has been possible to manipulate images – often in ways that can convince others they are real – for almost as long as photography itself, AI techniques such as machine learning have made the task of creating incredibly lifelike fakes much easier. It is now possible to produce completely synthetic videos of other people that look, move and speak like the real thing. It is easy to see how such technology might be useful in the film industry for CGI effects, for example, but it also has the potential to be abused. Some uses may not be objectively harmful but are controversial or simply “creepy” – for example, reactions to postmortem chatbots or Harry Potter-style moving photos of people who are dead have varied from enthralled to horrified.
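For readers curious about the underlying mechanism, the sketch below is a deliberately minimal illustration of the generative adversarial network (GAN) idea pictured above, not any production deepfake system: a generator network learns to produce samples that a discriminator network can no longer tell apart from real data. It uses toy two-dimensional points rather than images, but the adversarial training loop has the same shape.

    # Minimal GAN sketch (illustrative only): a generator learns to mimic a "real"
    # data distribution by fooling a discriminator. Toy 2D data stands in for images.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_batch(n=64):
        # "Real" samples: a 2D Gaussian centred at (2, -1) stands in for genuine footage
        return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # 1) Train the discriminator to score real samples as 1 and generated fakes as 0
        real = real_batch()
        fake = generator(torch.randn(64, 8)).detach()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2) Train the generator to make the discriminator score its fakes as real
        fake = generator(torch.randn(64, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # After training, generated points cluster near the "real" distribution
    print(generator(torch.randn(5, 8)))

Scaling this same loop up to faces, voices and video is what makes modern deepfakes so convincing.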

The ability to create fake footage of someone doing or saying something that never happened could be exploited for blackmail, identity theft, revenge porn, commercial gain or sponsorship, or simply to harm someone’s reputation. While deepfakes are arguably just an extension of the wider problem of fake news and disinformation, they have particularly alarmed those in the political sphere who fear they could be used to discredit politicians or disrupt elections.

Fortunately, here in the UK at least, deepfakes have not been the problem in politics that many people expected them to be (although some faked footage of political leaders was created before the last election, it was part of a campaign to raise awareness of deepfakes). But synthetic videos are a more serious and growing problem as a source of fake pornography that targets celebrities or as revenge porn. There is also growing awareness beyond the adult industries that deepfakes may be a way to surf on the glitter of celebrity without necessarily obtaining the celebrity’s ethical or paid collaboration – the recent faked video of Tom Cruise on TikTok fooled many and operates in a globally grey area of celebrity image rights.

This potential for deepfakes to harm reputations and sow doubt in what material we can trust has led legislators to seek ways of regulating them and the wider problem of disinformation. But the internet is not some kind of lawless Wild West that needs to be tamed anew every time a new technology appears. We already have a vast amount of legislation that regulates content in whatever medium it appears. If used in the right way, laws on data protection, copyright, pornography, spam, race hate, malicious communications, misleading advertising, libel, fraud, equality legislation and image rights might all be used to regulate material on the internet.

This isn’t to say the law won’t need tweaks. The US has been in the vanguard, with the state of Virginia the first to make it a criminal offence to distribute non-consensual deepfake pornography, while Texas and California have both prohibited the creation and distribution of deepfake videos intended to harm those in the political sphere before elections. A bill for a federal Malicious Deep Fake Prohibition Act of 2018, however, didn’t even make it to a vote in Congress. The UK will probably also soon have legislation to toughen up its response to a number of the internet’s negative aspects, including disinformation, trolling and abuse.

There are obvious challenges when it comes to the international nature of the internet – how do you enforce regulations across borders when the people who are creating misinformation, often rather careless of the law, are in countries that don’t have the same laws or values or aren’t well resourced for enforcement?

Regulating platform power to prevent the spread of disinformation

One solution to this issue of global enforcement is to look away from content creators, and towards regulating the platforms that host their content, as a more efficient way to tackle disinformation. We live in a world where what we see online is largely hosted by global platforms, and audience attention is determined by their competing algorithms. And the people who control those algorithms belong to the giant internet platforms – the Googles, Facebooks and Twitters.

In the past, both the UK and the EU have opted to encourage the large internet platform companies to self-regulate fake news, for example by signing them up to a Code of Practice on Disinformation. This EU initiative commits the companies, including Facebook, Google, Twitter, TikTok and Microsoft, to close down fake accounts and demonetise the spread of disinformation. Recent events, such as the rise of fake news during the last US election and the spread of COVID-19 conspiracy theories during the pandemic, have however persuaded many governments that these approaches don’t go far enough.

One leading approach has been adopted by the British Government, initially in an Online Harms White Paper. It envisages imposing a “duty of care” on platforms in relation to harmful content on social media and grants Ofcom, the broadcasting regulator, new powers to act as the regulator for this regime. A draft Online Safety Bill (OSB) is now inching slowly towards law. Provisionally, deepfakes alongside fake news and disinformation more widely (such as anti-vaccination propaganda) might come within this remit, although they are not mentioned in the latest committee report (see below). While most of the Bill is well intended, commentators worry that content which is unwelcome but legal might be censored without due process as a result. The Parliamentary Online Safety Committee identified in December 2021 a key structural concern: that the OSB focuses on content moderation at the expense of regulating the more fundamental harms arising from algorithmic amplification and targeting. They also identify a weakness in the OSB’s stubborn refusal to look at general social harms as opposed to content targeting specific individuals.

The European Union, by contrast, has drafted a Digital Services Act (DSA) and Digital Markets Act (DMA) that build on existing e-commerce rules to force online platforms to take more responsibility for the content they serve up to their users. One particularly interesting aspect of the DMA is that it partly relies on competition law-like remedies to make platforms more transparent, interoperable and accountable. Letting users take their data to other platforms incentivises a market more responsive to user needs. In particular, platforms depend on advertising revenue, which is increasingly generated by algorithms serving up viral extreme content and fake news. The DSA will require audit of the algorithms of the largest platforms to stop them automatically leading users to the most vicious and misleading content just to make money. Most recently, the EU has also set out a proposal to make targeted political advertising, which might include “fake news”, substantially more transparent.
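To make concrete what “algorithmic amplification” means in this debate, here is a deliberately simplified, invented illustration (the posts and scores are made up, and no real platform’s ranking system is this crude): when predicted engagement is the only signal a feed optimises for, the most provocative and least accurate items rise to the top, which is precisely the behaviour an audit of ranking algorithms is meant to surface.

    # Toy illustration of engagement-only ranking (invented data, not any real platform)
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        predicted_engagement: float  # clicks/shares a model expects the post to attract
        accuracy: float              # how truthful the post is (invisible to the ranker)

    feed = [
        Post("Measured report on vaccine trial results", 0.20, 0.95),
        Post("SHOCKING 'proof' the election was stolen", 0.90, 0.05),
        Post("Celebrity deepfake 'confession' video", 0.75, 0.00),
    ]

    # Rank purely by predicted engagement -- the revenue-maximising choice
    for post in sorted(feed, key=lambda p: p.predicted_engagement, reverse=True):
        print(f"{post.predicted_engagement:.2f}  {post.title}")
    # The least accurate, most inflammatory items come first, because accuracy
    # never enters the ranking objective at all.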

Importantly, when it comes to disinformation, we know that these platforms do have levers they can pull to reduce the circulation of content that is fake, and to close the accounts of those who are spreading it. Facebook and Twitter both, eventually, chose to deplatform then-President Trump after he declared the US election “stolen” and appeared to incite his supporters against democracy, a difficult choice which was subsequently approved, in a controversial semblance of judicial due process, by Facebook’s own internal oversight board. Although many would applaud these actions, they highlight worries over whether private platforms should have such immense powers to control speech rather than elected governments, or courts.

The UK and EU legislative proposals are for the first time seeking to create general, wide-ranging solutions to the problem of platform power and, specifically, how platforms drive the flow and uptake of disinformation. How successful these turn out to be remains to be seen (alongside a growing worry that these new regulations will be easily finessed by the largest and richest platforms but act as a deterrent to new entrants who might successfully compete with the dominant surveillance capitalism business model). The EU approach, with its emphasis on algorithmic amplification and dissemination of content, rather than content per se, as well as its use of competition levers, so far appears more sophisticated than the proposed UK law, though this may change in the future as the Competition and Markets Authority expands its range of tools and the shape of the OSB becomes firmer. As always, there is a fine line between regulation and censorship, and between free markets and social control. Any country that values free speech, plurality of opinion and democracy would do well to remember that.

Further reading

This blog is one of a series of perspective pieces published to support the Royal Society's Online information environment report, which provides an overview of how the internet has changed, and continues to change, the way society engages with scientific information, and how it may be affecting people’s decision-making behaviour.  

Authors

  • Lilian Edwards

    Chair in Law, Innovation and Society at Newcastle University
    Lilian is a leading academic in the field of internet law. She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law, and is a partner in the Horizon Digital Economy Hub at Nottingham, the lead for the Alan Turing Institute on Law and AI, and a fellow of the Institute for the Future of Work.