There are a number of ethical concerns around the development and use of AI. At a high level, these break down into concerns about how AI models are trained, how they are developed and deployed, and the highest-risk harms that powerful AI could be capable of, sometimes referred to as ‘existential risk.’
Most training concerns relate to the data that goes into AI algorithms and the transparency of the training process. This includes considering what biases might be embedded in the data AI models are trained on, whether this data contains sensitive and personal information, and whether training makes use of copyrighted material.
In terms of development, it is important to make AI systems explainable and accountable, so that humans can understand how a model reached a particular conclusion. This can help to identify errors and will be particularly important as AI is applied in critical systems such as healthcare, law, and government. On deployment, a key debate is who bears responsibility when an AI system causes harm – should it, for instance, be the engineers who developed the model, the developers who built an app on top of it, or the end-users who applied the system? Such questions of liability remain contentious and unresolved, and are of particular relevance when it comes to designing regulation.
Finally, there are the highest-level societal risks from AI, often called ‘existential risk.’ These range from concerns about AI replacing human jobs to its use in developing dangerous weapons or tools of war.
- Bias and objectivity – AI algorithms can reflect the biases both of the data they are trained on and of the people who build them. Exploring ways of countering this, such as improving diversity in training datasets and the AI workforce, is necessary.
- Data use and privacy – The data used by AI algorithms can include personal information, so it needs to be used in a way that doesn’t infringe people’s rights to privacy. As AI strays into creative areas, the copyright and ownership of the material it ingests and produces is also contentious.
- Transparency – Making AI explainable and accountable is important so that it is possible to understand how an algorithm or system reached a particular conclusion. This can help to identify errors, and is likely to be particularly important as AI is used in healthcare, the legal system and on our roads.
- Harm and liability – If an AI system harms someone, there are questions about who should accept responsibility. Should it be the engineers who built the AI model, the people who trained it, or those who are using it? In some cases this involves tricky moral dilemmas, such as an autonomous vehicle having to choose between actions that might harm those inside the vehicle or other road users around it.
- Societal harm – New uses of AI could replace human jobs, which may require measures to support and retrain those affected, or steps to protect human roles. There is also a risk of exploitation of the low-paid data workers who help train AI models – a vital task in developing these systems.
- Warfare and weapons – The development of autonomous weapons powered by AI raises wide-ranging ethical questions about removing humans from life-and-death decisions. There are also questions about how such weapons might be misused and how they should be regulated.