Dr Vint Cerf ForMemRS, one of the founding fathers of the internet, defends the open nature of the internet and explains why critical thinking is key to a more desirable information environment.
Access to the Internet has reached a little over half the population of the planet and continues to grow. In the 1990s, the publicly accessible Internet was largely a dial-up environment, capable of handling tens of kilobits per second. Today, access speeds over a gigabit per second are available, and smartphones have become nearly ubiquitous, with more of these devices in use than there are people on the planet. For many of those with access, the Internet has become their primary source of information. This is largely due to one of the Internet’s greatest strengths – its open nature – which has allowed significant innovation and freedom of expression to flourish. Anyone and everyone able to get online is theoretically free to blog, create web pages, post pictures and video, send emails, add comments to websites and search the web for content.
But it is also fair to say that this open platform has produced an environment in which those who wish to do harm to others can flourish too. Hackers intent on disrupting services, stealing information and taking other people’s money have been able to exploit the Internet. As we move into a world where more and more of the devices, infrastructure and services around us are controlled over Internet-connected networks – particularly with the arrival of the Internet of Things – we will see this threat continue to grow.
There is, however, now solid evidence for a somewhat more subtle threat to Internet users – the creation and spread of misinformation and disinformation. Its targets range from individuals, through personal bullying that sometimes has tragic and fatal consequences, to entire electorates, through the injection of divisive content intended to sow hatred, incite friction and encourage violence.
Information is very quickly picked up, shared and replicated over the Internet – a story can be repeated on multiple social networking feeds, news sites and blogs in a short space of time. Ironically, conspiracy theories and bad news propagate faster than good news. This is not surprising: bad news often contains implicit or explicit warnings of danger, and our societies have evolved to share alerts quickly as a safety measure. As is well known in propaganda circles, repetition of false information can make it more believable and helps to reinforce the persistence of fake news and misinformation. With the help of a form of artificial intelligence called machine learning, it is now possible to create entirely fictitious images, spoken audio and video that are indistinguishable from the real thing. One can readily imagine the damage that these so-called “deep fakes” can do to national, corporate and personal reputations.
The Internet is also intended to operate without borders – its addressing structure is based on the topology of interconnected networks rather than on geographical location. This allows information created in one country to be readily accessed in another. But it also allows perpetrators to hide behind a veil of anonymity or pseudonymity. In an age of instant access to information, and the ability to propagate it globally with the click of a mouse, we are experiencing a global, digital Wild West. Everyone so equipped can fire their .45-calibre digital weapons at any target with very little consequence.
With all this in mind, I am sometimes asked what changes I would make to the Internet’s design, given the chance to start again, to make it a safer, more trustworthy place. For me, this question misses the point. To make the Internet’s structure any less open would rob us of so many of the positives it has brought. The freedom to access information and to share it is what has made the Internet so great.
If we try to filter out the falsehoods and misrepresentations, at what point does that become censorship? Where do we draw the lines about what we consider to be “bad” information? We might disagree with something, but does that necessarily make it wrong?
It is also tempting to try to invent automated tools for detecting misinformation – with 400 hours of video uploaded to YouTube every minute, it is beyond the ability of humans to look at it all, let alone do something about it. But this is not as easy as it sounds. Even detecting the actions of bots – automated programs that click “likes” or repost content on social networking sites such as Facebook and Twitter – is hard. Yet these bots can not only spread misinformation more quickly, but also create the mistaken impression that it is popular and credible.
We could introduce post-hoc enforcement, telling people that if they are caught creating or spreading misinformation there will be consequences, but that would require cross-border cooperation that does not currently exist. A third possible response is moral suasion: if society adopts norms that oppose the spread of misinformation, the social pressure to behave according to those norms may have a moderating effect.
I don’t have perfect answers on how to deal with this – it is a problem we need to confront as a society. Misinformation can be found in books, newspapers, magazines, television and radio broadcasts, conversations with friends and random public billboards!
Perhaps we already have some of the tools we need to deal with this problem. Inside all of our heads is something I tend to refer to as “wetware”. Far more capable than the hardware and software we use on computers, our brains are the best tool we have for dealing with misinformation.
Critical thinking can go a long way towards dissipating the deleterious side-effects of social network manipulation. Of course, critical thinking takes work – it involves asking questions about where content comes from, who is making a claim, what motivation they might have to misrepresent it and whether we can find any corroborating evidence to support an assertion. These are the kinds of questions we should all learn to ask when looking at any information we might find, online or elsewhere. They are already the essence of the scientific method, so perhaps we should train people to think in this way when they use the Internet too.
Extraordinary claims should be accompanied by extraordinary evidence, and we should be naturally suspicious of any information offered without an apparent source or basis. This to me seems to be an essential notion for filtering truth from the ocean of content we can now find on the Internet.
Further reading
This blog is one of a series of perspective pieces published to support the Royal Society’s Online information environment report, which provides an overview of how the internet has changed, and continues to change, the way society engages with scientific information, and how it may be affecting people’s decision-making behaviour.