
AI Assurance

“AI assurance” describes methods to assess, and potentially certify, AI systems. These methods may measure, or more loosely evaluate, an AI system’s performance against criteria such as lack of bias, transparency, robustness and cybersecurity, or they may involve a more holistic assessment of the potential impact of an AI system, for example on employees, users and society. AI systems are already widely in use, but standard approaches and tests are lacking, and technical solutions are still needed for fundamental challenges such as transparency. Moreover, while there is general agreement on the issues, there are real differences of opinion on whether regulation is appropriate and how to balance competing interests, such as whether privacy should take precedence over innovation.

The UK Government’s new National AI Strategy calls for the Centre for Data Ethics and Innovation (CDEI) to publish its “assurance roadmap” in the next few months. The roadmap is intended to set out the CDEI’s view of the current AI assurance ecosystem and how it should develop. As such, it is only a stepping-off point for policy setting, regulation and standard setting: the CDEI merely hopes “to help industry, regulators, standards bodies, and government, think through their own roles in this emerging ecosystem”.

The CDEI has published three blogs on AI assurance, calling for “an effective AI assurance ecosystem” to allow “actors, including regulators, developers, executives, and frontline users, to check that [AI] tools are functioning as expected, in a way that is compliant with standards (including regulation), and to demonstrate this to others”. AI assurance should provide “trustworthy information about how a product is performing on issues such as fairness, safety or reliability, and, where appropriate, ensuring compliance with relevant standards.”

AI assurance as a service

Because the actors involved with AI may lack information and specialist knowledge, the CDEI predicts the development of “assurance as a service” with products and services such as: “process and technical standards; repeatable audits; certification schemes; advisory and training services”. In this way, AI assurance will both help to unlock the economic and social benefits of AI and become a significant economic activity in its own right.

Coordination and regulation

The CDEI suggests coordination is needed to avoid fragmentation of assurance techniques and approaches. It suggests assessing assurance models from various fields, including external audits, certification by an independent body, accreditation by a regulator or an accreditation body for those performing assurance, and impact assessments by organisations using AI or external advisors. It intends to produce a review of the current ecosystem and recommendations to help industry, regulators, standards bodies, and government consider their roles.

The CDEI says its AI assurance review is complementary to work on data assurance by the Open Data Institute (a non-governmental, non-profit company).

Competing interests and trade-offs

The CDEI’s second blog identifies a need to build consensus on how to align the roles, responsibilities and interests of different assurance users: government policymakers, regulators, executives deciding whether to develop, buy or use AI systems, developers, frontline users, and individuals affected by AI systems. This will also require coordination between regulators where AI systems “cut across the purview of multiple sector-based regulators, resulting in ambiguity over which regulator has ultimate responsibility”.

In particular, the CDEI gives two examples of tensions that will require decisions by governments and regulators.

Compliance or risk assurance

The CDEI’s third blog contrasts two forms of assurance: compliance assurance and risk assurance.

The CDEI argues these are “mutually reinforcing”. Compliance assurance is best suited to addressing basic quality and safety issues, whereas risk assurance is suitable for nuanced, context-specific assessments.

The role of standards

The CDEI’s third blog suggests that standards can support both compliance and risk assurance. Compliance assurance requires standards covering performance and safety assessment; what information should be measured for audits and how it should be measured; certification; and accreditation. Risk assurance can be supported by standards that set a common language and norms. The CDEI gives the example of the General Data Protection Regulation (GDPR), which established a common language for managing privacy risks and conducting data protection impact assessments (DPIAs).

Comment

The UK’s National AI Strategy aims to produce the “most pro-innovation regulatory environment in the world”. The detail is yet to come, and the CDEI’s assurance roadmap, expected in the coming months, is only an initial step: it will identify the tools for regulation rather than the substance of regulation itself. Meanwhile, the EU has published its draft AI Regulation. It remains to be seen whether the UK can valuably depart from regulatory standards set by the EU if AI products and services in practice need to conform with EU requirements. A more nuanced question is whether the UK can attract AI research and development by taking different approaches to regulation, intellectual property, visas and other levers.

Matt Hervey is Head of Artificial Intelligence (UK) at Gowling WLG (UK) and advises on Artificial Intelligence (AI) and IP across all sectors, including automotive, life sciences, finance and retail. Find out more about Matt Hervey on the Gowling WLG website. He is co-editor of The Law of Artificial Intelligence (Sweet & Maxwell).
