Explainable AI
This briefing summarises current discussions about the development of explainable AI, setting out some of the reasons why such AI methods might be desirable, the different approaches available, and some of the considerations involved in creating these systems.
What is explainable AI?
Artificial intelligence (AI) is a broad term. It describes a range of tools and methods that allow computer systems to carry out complex tasks or act in challenging environments. Recent years have seen significant advances in AI technologies, and many people now interact with AI-supported systems on a daily basis. These AI tools can produce highly accurate results, but many are also highly complex. This complexity has led researchers and policymakers to ask whether it is possible to understand how an AI system reaches its outputs, or whether AI is inherently a ‘black box’.
There are many advantages to understanding how or why an AI-enabled system has produced a specific output. Explainability can help developers check that a system is working as expected; it may be necessary to meet regulatory standards; and it may be important in allowing those affected by a decision to challenge or change that outcome. There can also be trade-offs or challenges involved in creating explainable AI: there may be a need to manage concerns about privacy, to consider access to intellectual property, or to put in place checks that the explanations provided are reliable.
Different AI methods are affected by concerns about explainability in different ways, and different methods or tools can provide different types of explanation. There are examples of AI systems that are not easily explainable but can be deployed without concern; there are also cases where the use of explainable AI methods is necessary and must be supported by wider systems that ensure accountability across the full analytics pipeline – from data collection to decision. Those developing and deploying AI need to take into account the needs of the different groups interacting with the system, considering what types of explanation would be useful, and for what purpose.
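To make the idea of different types of explanation concrete, the sketch below shows one widely discussed family of explanation tools: post-hoc feature attribution, here in the form of permutation feature importance computed with scikit-learn. The dataset, model and library choices are illustrative assumptions made for this example; they are not drawn from the briefing itself and are not a recommendation of any particular method.

```python
# Illustrative sketch only: a post-hoc, "global" explanation of an otherwise
# opaque model, using permutation feature importance. The dataset and model
# are placeholders chosen for the example, not taken from the briefing.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and train a complex ("black box") classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features as a simple, human-readable summary.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda item: item[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

An explanation of this kind answers one question (which inputs does the model rely on overall?) but not others, such as why a particular individual received a particular decision, which is one reason different audiences may need different types of explanation.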