LoupedIn

AI Assurance

October 9, 2021, Gowling WLG


“AI assurance” describes methods to assess and potentially certify AI systems. These methods may aim to measure, or more loosely evaluate, an AI system’s performance on qualities such as lack of bias, transparency, robustness, and cybersecurity. Or they may involve a more holistic assessment of the potential impact of an AI system, such as on employees, users and society. AI systems are already widely in use, but we lack standard approaches and tests, and we need technical solutions to fundamental challenges such as transparency. Moreover, while there is general agreement on the issues, there are real differences of opinion on whether regulation is appropriate and how to balance competing interests, such as whether privacy is more important than innovation.

The UK Government’s new National AI Strategy calls for the UK Government’s Centre for Data Ethics and Innovation (CDEI) to publish an “assurance roadmap” in the next few months. The roadmap is intended to set out the CDEI’s view of the current AI assurance ecosystem and how it should develop. As such, it is only a starting point for policy setting, regulation and standard setting: the CDEI merely hopes “to help industry, regulators, standards bodies, and government, think through their own roles in this emerging ecosystem”.

The CDEI has published three blogs on AI assurance, calling for “an effective AI assurance ecosystem” to allow “actors, including regulators, developers, executives, and frontline users, to check that [AI] tools are functioning as expected, in a way that is compliant with standards (including regulation), and to demonstrate this to others”. AI assurance should provide “trustworthy information about how a product is performing on issues such as fairness, safety or reliability, and, where appropriate, ensuring compliance with relevant standards.”

AI assurance as a service

Because the actors involved with AI may lack information and specialist knowledge, the CDEI predicts the development of “assurance as a service” with products and services such as: “process and technical standards; repeatable audits; certification schemes; advisory and training services”. In this way, AI assurance will both help to unlock the economic and social benefits of AI and become a significant economic activity in its own right.

Coordination and regulation

The CDEI suggests coordination is needed to avoid fragmentation of assurance techniques and approaches. It suggests assessing assurance models from various fields, including external audits, certification by an independent body, accreditation by a regulator or an accreditation body for those performing assurance, and impact assessments by organisations using AI or external advisors. It intends to produce a review of the current ecosystem and recommendations to help industry, regulators, standards bodies, and government consider their roles.

The CDEI says its AI assurance review is complementary to work on data assurance by the Open Data Institute (a non-government non-profit company). 

Competing interests and trade-offs

The CDEI’s second blog identifies a need to build consensus on how to align the roles, responsibilities and interests of different assurance users: government policymakers, regulators, executives deciding whether to develop, buy or use AI systems, developers, frontline users and individuals affected by AI systems. This will also require coordination by regulators where AI systems “cut across the purview of multiple sector-based regulators, resulting in ambiguity over which regulator has ultimate responsibility”.

In particular, the CDEI gives two examples of tensions that will require decisions by governments and regulators:

  • trade-offs between risk minimisation and encouraging innovation; and
  • who should be ultimately accountable for an AI system (primarily as between the developer and an executive buying or using the system).

Compliance or risk assurance

The CDEI’s third blog contrasts two forms of assurance:

  • For compliance assurance, a third party assesses whether an AI system, or the organisation using it, complies with a given standard. Compliance may be tested by formal (e.g. mathematical) verification or through a business and compliance audit, and may result in some form of certification (like a “kitemark”). The tester is typically accredited to ensure their competence.
  • For risk assurance, an organisation using AI and/or a third party asks open-ended questions about how a system works in order to identify, assess and measure potential risks. Examples include impact assessments (prospective), impact evaluations (retrospective), bias audits and ongoing testing (ideally including “red-teaming” – using a devil’s advocate to avoid “groupthink”).

The CDEI argues these are “mutually reinforcing”. Compliance assurance is best suited to addressing basic quality and safety issues, whereas risk assurance is suitable for nuanced, context-specific assessments.

The role of standards

The CDEI’s third blog suggests that standards can support both compliance and risk assurance. Compliance assurance requires standards covering: performance and safety assessment; what information should be measured for audits and how it should be measured; certification; and accreditation. Risk assurance can be supported by standards setting common language and norms. The CDEI gives the example of the General Data Protection Regulation (GDPR) having set a common language for managing privacy risks and conducting data privacy impact assessments (DPIAs).

Comment

The UK’s National AI Strategy aims to produce the “most pro-innovation regulatory environment in the world”. The detail is yet to come, and the CDEI’s assurance roadmap, expected in the coming months, is only an initial step – it will identify the proper tools for regulation, not the substance of regulation itself. Meanwhile, the EU has published its draft AI Regulation. It remains to be seen whether the UK can valuably depart from regulatory standards set by the EU if AI products and services in practice need to conform with EU requirements. A more nuanced issue may be whether the UK can encourage an influx of AI research and development by taking different approaches to regulation, intellectual property, visas and other levers.

About the author(s)

Gowling WLG

Gowling WLG is an international law firm operating across an array of different sectors and services. Our LoupedIn blog aims to give readers industry insight, technical knowledge and thoughtful observations on the legal landscape and beyond.


Filed Under: AI, Corporate Responsibility, Innovation, Intellectual Property, Opinion, Public Law & Regulation Tagged With: Artificial Intelligence (AI), Brexit, Public Law & Regulation, Tech

Views expressed in this blog do not necessarily reflect those of Gowling WLG.

