Artificial intelligence and ethics

Artificial intelligence (AI) has advanced with unprecedented speed and has the potential to transform our lives at home, at work, and in health and education. AI is also transforming the methods and nature of scientific research. The wide-ranging impact of this technology makes it essential that it is used responsibly and does not cause harm to individuals or society. This means considering ethics and establishing ethical principles to guide the development and use of AI, from regulatory frameworks to fostering a culture of ethics and values around the technology.

What is AI ethics?

Artificial intelligence (AI) has the potential to bring significant societal and economic benefits, but it could also cause considerable harm or disruption to people’s lives. The ethics of AI involves considering the consequences this technology may have on society and how it can be used fairly, transparently and appropriately. Key considerations include privacy and consent in the data used by AI algorithms, as well as mechanisms to minimise bias in the results AI produces. If these issues are not considered early, AI has the potential to cause harm – even if unintentionally – to individuals, communities and society as a whole. The Royal Society’s report, Machine learning: the power and promise of computers that learn by example, advocates teaching ethical concepts around machine learning in universities and in schools from a young age, so that these considerations become hardwired into the development of AI algorithms.

While much of the attention around AI focuses on its potential benefits, there are also important conversations to be had about how this rapidly evolving technology will affect individuals, organisations and society. As part of its work on AI and machine learning, the Royal Society has been engaging with the public to learn more about how people view these technologies. There is enormous hope in the potential of AI to improve human lives, but also well-founded concern about the technology – from the displacement of workers and the propagation of harmful biases to fears about how personal data will be used and the unintentional harm AI could cause to individuals.

AI technologies are similarly transforming the nature and methods of scientific inquiry, offering opportunities to tackle problems and datasets that were previously out of reach. But this has also raised concerns about how AI can be used fairly and safely in the scientific process.

Addressing these questions requires conversation and debate at every level of society about the ethics of AI, in order to establish guiding principles for how the technology is developed and used.

There are a number of ethical concerns around the development and use of AI. At a high level, these break down into concerns about how AI models are trained, how they are developed and deployed, and the highest-risk harms that powerful AI could be capable of, sometimes referred to as ‘existential risk’.

Most training concerns relate to the data that goes into AI algorithms and the transparency of the process. This includes considering what biases might be embedded in the data AI models are trained on; whether this data contains sensitive and personal information; and whether training makes use of copyrighted material.

In terms of development, it is important to make AI systems explainable and accountable, such that humans can understand how a model reached a particular conclusion. This can help to identify errors and will be particularly important as AI is applied in critical systems such as healthcare, law and government. On deployment, a key debate is who bears responsibility when an AI system causes harm – should it, for instance, be the engineers who developed the model, the developers who built an app on top of it, or the end users who applied the system? Such questions of liability remain contentious and unresolved, and are of particular relevance when it comes to designing regulation.

Finally, there are the highest-level societal risks from AI, often called ‘existential risk’. These range from concerns about AI replacing human jobs to its use in developing dangerous weapons or tools of war.

  • Bias and objectivity – AI algorithms can reflect the biases present in the data they are trained on and in the people who build them. Exploring ways of countering this, such as improving the diversity of training datasets and of the AI workforce, is necessary (a brief illustrative sketch follows this list).
  • Data use and privacy – The data used by AI algorithms can include personal information, so it needs to be used in a way that does not infringe people’s right to privacy. As AI strays into creative areas, the copyright and ownership of the material it ingests and produces are also contentious.
  • Transparency – Making AI explainable and accountable is important so that it is possible to understand how an algorithm or system reached a particular conclusion. This can help to identify errors, and is likely to be particularly important as AI is used in healthcare, the legal system and on our roads.
  • Harm and liability – If an AI system harms someone, there are questions about who should accept responsibility. Should it be the engineers who built the AI model, the people who trained it, or those who are using it? In some cases this involves tricky moral dilemmas, such as an autonomous vehicle that has to choose between actions that might harm those inside the vehicle or other road users around it.
  • Societal harm – New uses of AI could replace human jobs, which may require measures to support and retrain those affected, or steps to protect human roles. There is also the risk of exploitation of the low-paid data workers whose labour is needed to train AI models – a vital task in developing these systems.
  • Warfare and weapons – The development of autonomous weapons powered by AI raises wide-ranging ethical questions about removing humans from life-and-death decisions. There are also issues around the potential misuse of such weapons and how they should be regulated.
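
To make the bias concern above more concrete, the short Python sketch below shows one simple way a developer might audit a training dataset before building a model: comparing how often a favourable outcome appears for different groups (a basic demographic-parity check). The dataset, column names and threshold used here are purely illustrative assumptions, and a real audit would go well beyond a single summary statistic.

    import pandas as pd

    # Hypothetical training data: each row is one example, with a protected
    # attribute ("group") and a binary label ("outcome") a model would learn.
    data = pd.DataFrame({
        "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
        "outcome": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    # Favourable-outcome rate per group: large gaps suggest the labels
    # themselves may encode a bias that a model trained on them would reproduce.
    rates = data.groupby("group")["outcome"].mean()
    overall = data["outcome"].mean()

    # Flag groups whose rate differs markedly from the overall rate
    # (0.2 is an illustrative threshold, not a recognised standard).
    for group, rate in rates.items():
        if abs(rate - overall) > 0.2:
            print(f"Possible imbalance for group {group}: "
                  f"{rate:.2f} vs overall {overall:.2f}")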

Policymakers are increasingly concerned with developing rules and guiding principles to ensure AI is developed and used in an ethical way. The UK Government, the EU and a number of other countries have outlined broad principles they believe should underpin the governance of AI. As these frameworks continue to develop, it is important that they take in contributions from across the research community, industry and wider society, and that they address the central issues of transparency, fairness and responsibility. Once in place, such principles will help to steer future work on AI and to ensure a culture of ethics around the technology.

The Royal Society’s report, Science in the age of AI, explores how this technology is altering the way research is conducted and what this means for the scientific community. It highlights several areas where AI could lead to ethical concerns within scientific research, including bias introduced by the use of AI tools, hallucinations and false information generated by AI, and the poisoning of data. The report also raises concerns about the potential misuse of AI in science, its environmental costs and its impact on humans, and identifies opportunities for the research community to proactively identify and address these potential harms.

The Society also published a landmark study on machine learning in 2017, which highlighted ways in which the ethical and social implications of AI could be integrated into governance and education systems. For example, it recommended that machine learning concepts, including ethics, be incorporated into education from a young age and that postgraduate students in machine learning receive training in ethics as part of their studies. This would help to better prepare the AI developers and users of the future to navigate the issues they may face.

The Royal Society has also encouraged public discussion about the implications of AI through its You and AI project, while its AI and social good programme considered how society as a whole can benefit from the technology.

One area likely to become more important as AI technologies enter widespread use is the environmental cost they incur. Currently, AI is heavily dependent upon large, centralised data centres that consume huge amounts of energy and water. As the use of AI expands, the demands on these resources could become unsustainable unless the technology is made more efficient.

Another area where ethics will be crucial is the move towards artificial general intelligence, in which AI would match or even surpass human cognitive abilities. Some believe such a development could supercharge human progress, while others fear it could pose an existential threat. The Royal Society is already bringing together experts to consider what unexpected issues might arise in the future of AI.