On 17 July 2020 the High-Level Expert Group on Artificial Intelligence (AI HLEG), the group of experts appointed by the European Commission to support the implementation of the European Strategy on Artificial Intelligence, presented the final Assessment List for Trustworthy AI (the List).
The concept of “trustworthy” AI was introduced by the AI HLEG in the Ethics Guidelines for Trustworthy Artificial Intelligence (Ethics Guidelines) and is based on seven key requirements:
- human agency and oversight;
- technical robustness and safety;
- privacy and data governance;
- transparency;
- diversity, non-discrimination and fairness;
- environmental and societal well-being; and
- accountability.
The List builds on these requirements, incorporating feedback received during a piloting phase in the second half of 2019. Importantly, the List has also been developed into a prototype web-based tool to guide developers and deployers of AI through the checklist. The List includes focused questions for organisations relating to each of the seven key requirements, with introductory sections offering guidance as to the importance and purpose of each one. While the List does not provide solutions to the questions posed, it offers a framework within which to consider how identified risks might be mitigated.
Prior to applying the List, a fundamental rights impact assessment is recommended to consider how the AI system might affect the rights granted under the EU Charter of Fundamental Rights and the European Convention on Human Rights (ECHR), its protocols and the European Social Charter. Example questions provided include:
- whether the AI system might negatively discriminate against people on the basis of, for example, sex, race, colour, ethnic or social origin;
- whether the system protects personal data relating to individuals in line with the GDPR;
- whether the system respects the rights of the child; and
- whether it respects other freedoms, such as the freedom of expression.
The List does not put in place mandatory requirements; it is intended to assist organisations in their understanding of trustworthy AI and in identifying risk areas specific to the sector or industry in which they operate.
The European Commission is currently in the process of developing regulatory proposals relating (in part) to trustworthy AI, following the completion of a public consultation process in response to its White Paper on Artificial Intelligence last month. These regulations may include aspects of the List and fundamental rights impact assessments. In the meantime, applying the recommendations of the AI HLEG should help companies align their use of AI with the regulatory framework being developed.
About the author(s)
Gowling WLG is an international law firm operating across an array of different sectors and services. Our LoupedIn blog aims to give readers industry insight, technical knowledge and thoughtful observations on the legal landscape and beyond.