The UK’s Information Commissioner’s Office (“ICO”) has issued the final version of its guidance on artificial intelligence, entitled “Explaining decisions made with artificial intelligence (AI)”, drafted in collaboration with The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The guidance aims to help organisations explain decisions made by AI systems to the individuals affected. Explaining AI is one of the main challenges for organisations considering the use of AI tools, as has been stressed in the Ethics Guidelines for Trustworthy AI prepared by the High-Level Expert Group on Artificial Intelligence (“AI HLEG”) as part of the European Commission’s AI strategy.
The guidance covers a wide range of data protection-related topics including the basic legal framework that applies to AI, practical steps to take to explain AI-assisted decisions and the types of policies, procedures and documentation that organisations can put in place.
This guidance is not a statutory code of practice under the Data Protection Act 2018 and operates more as a set of good practice measures for explaining AI-assisted decisions. However, given the importance of complying with the transparency principle under the General Data Protection Regulation (GDPR) when processing personal data through AI systems, we recommend that this document be considered by every organisation testing or otherwise using AI tools. It is part of a wider range of resources that the ICO is putting in place in relation to AI, such as the Draft Guidance on the AI auditing framework, which was published for consultation last May, and the Big Data, AI and Machine Learning report, updated in 2017.
The new guidance comprises three parts. We have outlined the key points of each part below.
- Part 1 (The Basics of Explaining AI) – This part is mainly addressed to data protection officers (“DPOs”) and compliance teams. It outlines the legal framework behind giving notice to individuals with explanations about the use of AI and AI-assisted decisions.
- Part 2 (Explaining AI in Practice) – This part is mainly addressed to technical teams. It provides organisations with a set of six tasks to assist in their efforts to design explainable AI systems and deliver appropriate explanations to individuals according to their needs and skills.
- Part 3 (What Explaining AI Means for Your Organisation) – This part is mainly addressed to senior management, but DPOs and compliance teams may also find it useful. It explores the various roles, policies, procedures and documentation that organisations can put in place to ensure that they have the appropriate internal structure to provide meaningful explanations of AI decisions to affected individuals.