This week the Information Commissioner’s Office and The Alan Turing Institute have launched a consultation on their co-badged ExplAIn guidance. The guidance aims to give organisations practical advice on explaining the processes, services and decisions delivered by artificial intelligence to the individuals affected by them. Carl Wiper, ICO Group Manager, explains more.

More and more organisations, across all sectors, are using artificial intelligence (AI) to make, or to support, decisions about individuals. Organisations developing these AI systems know the positive impact they can have on society. But to make them work, they need the public’s trust, and transparency and accountability are key to earning it.

A core issue is the difficulty of explaining how and why an AI system has produced a particular output or prediction. We welcome the Royal Society’s paper on AI explainability, which helpfully sets out the key issues. Where personal data is being used, these outputs or predictions can have a significant impact on people, which is why it is essential that the decisions are explainable.

Independent experts and the Government have recognised this and called for guidance to help organisations work out how to explain decisions made using AI. The Alan Turing Institute (The Turing) and the Information Commissioner’s Office (ICO) were tasked with producing this guidance.

Following both public and industry engagement we have now published a draft of this guidance, which we are currently consulting on.

The guidance lays out four key principles that organisations must consider when developing AI systems. These are:

  1. Be transparent
  2. Be accountable
  3. Consider context
  4. Reflect on impacts

Rooted within the General Data Protection Regulation (GDPR) and other relevant laws, these principles will help organisations govern their use of AI systems and become more accountable.

Within an AI explanation, the guidance advises organisations to look at the following (a short illustrative sketch follows the list):

  • Rationale: the reasons that led to a decision, delivered in an accessible way
  • Responsibility: who is involved in the development and management of an AI system, and who to contact for a human review of a decision
  • Data: what data has been used in a particular decision, and what data has been used to train and test the AI model
  • Fairness: steps taken to ensure that AI decisions are generally unbiased and fair, and whether or not an individual has been treated fairly
  • Safety and performance: steps taken to maximise the accuracy, reliability, security and robustness of the decisions the AI system helps to make
  • Impact: the effect that the AI system has on an individual, and on society
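To make these elements concrete, here is a minimal, purely illustrative sketch of how an organisation might record the six explanation types alongside an individual AI-assisted decision. All of the names, fields and the loan scenario below are assumptions for the sake of example; the draft guidance does not prescribe any particular format or schema.

```python
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    """Illustrative record of the six explanation types in the draft guidance.

    Hypothetical structure for discussion only - the guidance does not
    define a schema like this.
    """
    rationale: str               # plain-language reasons that led to this decision
    responsibility: str          # who develops/manages the system and who to contact for a human review
    data: str                    # data used for this decision, plus training and test data provenance
    fairness: str                # steps taken to check for bias and to treat the individual fairly
    safety_and_performance: str  # measures for accuracy, reliability, security and robustness
    impact: str                  # effects of the system on the individual and on society

# Hypothetical example: a declined loan application
explanation = DecisionExplanation(
    rationale="Application declined mainly because of a short credit history.",
    responsibility="Credit risk team; contact reviews@example.org for a human review.",
    data="Your application form and credit file; model trained and tested on 2015-2019 lending records.",
    fairness="Model audited quarterly for uneven outcomes across protected characteristics.",
    safety_and_performance="Accuracy and stability monitored against a held-out test set.",
    impact="Affects access to credit only; no automated effect on other products.",
)
```

A structured record like this is only one possible way of keeping the information together; what matters under the guidance is that each element can actually be produced and communicated in an accessible way.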

When delivering the explanation to the individual affected, there are various contextual factors that will inform what organisations tell them first and what information they make available separately, layering explanations to avoid information overload (a second illustrative sketch follows the list):

  • Domain: the setting or sector in which the AI system is deployed to help make decisions about people. What people want to know in the health sector will be very different to the explanation they will want in the criminal justice domain
  • Impact: the effect an AI-enabled decision can have on an individual. Varying levels of severity and different types of impact can change what explanations people will find useful, and the purpose the explanation serves
  • Data: the data used to train and test an AI model, and the input data used for a particular decision. The type of data used can influence an individual’s willingness to accept or contest an AI-enabled decision, and the actions they take as a result of it
  • Urgency: the importance of receiving, or acting upon, the outcome of a decision within a short timeframe
  • Audience: the individuals the explanation is being given to will influence what type(s) of explanation will be useful
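As a purely illustrative sketch of this layering idea, the snippet below shows how simple contextual rules might decide which explanation types to surface first, with the rest kept available on request. The rules, values and function name are assumptions for illustration; the draft guidance asks organisations to weigh these factors but does not define rules like these.

```python
def first_layer(domain: str, severity: str, urgent: bool) -> list[str]:
    """Illustrative rule of thumb for which explanation types to present first.

    Hypothetical only: the guidance does not prescribe rules like these.
    """
    # Rationale and responsibility are typically what affected individuals ask about first.
    layer = ["rationale", "responsibility"]

    if severity == "high":
        # Higher-impact decisions call for fairness and impact information up front.
        layer += ["fairness", "impact"]
    if domain == "health":
        # In safety-critical settings, surface safety and performance information early.
        layer.append("safety_and_performance")
    if urgent:
        # When the individual must act quickly, keep the first layer short;
        # everything else stays available in a second layer on request.
        layer = layer[:2]
    return layer

# Hypothetical usage: a high-impact, non-urgent health-sector decision
print(first_layer(domain="health", severity="high", urgent=False))
# ['rationale', 'responsibility', 'fairness', 'impact', 'safety_and_performance']
```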

Not only can explaining decisions made using AI help organisations build trust with their customers, it can also improve their internal governance: an informed workforce is better able to maintain oversight of what these systems do and why.

The benefits for society of explaining AI decisions are also significant: a better-informed public can help to allay disproportionate concerns about AI and support a debate that is more constructive and mutually beneficial for business and society. Explaining the outputs of AI systems can also lead to better outcomes, as it helps to identify where bias or discrimination may occur so that it can be mitigated.

We want to ensure the guidance is practical in the real world, so organisations can easily apply the principles when developing AI systems. This is why we are requesting feedback from those considering or developing the use of AI. Whether you’re a data scientist, app developer, business owner, CEO or data protection practitioner, we want to hear your thoughts.

The consultation will be open until 24 January 2020 and the guidance will be published in full later in 2020. However, AI is a complex and fast-developing area, so we will continue working on the guidance beyond then to ensure it remains relevant.

If you’d like to find out more about explainable AI and why it matters, the Royal Society has published a policy briefing that summarises these policy debates.

Authors

  • Carl Wiper

    Carl Wiper has worked at the Information Commissioner’s Office since 2010. He is a Senior Policy Officer in the Policy Delivery department. He is currently working on a number of policy issues in relation to data protection and freedom of information, including big data, profiling and outsourcing.