The UK’s National AI Strategy: governance, regulation and law

In preparation for speaking to the Westminster Forum last week, I crystallized my current thinking on the legal and regulatory aspects of the UK’s ten-year National AI Strategy. In short, the Government’s consultations on privacy and IP, and its proposed White Paper on the choice between sector-specific and “cross-cutting” regulation, are important places to start. With the EU “AI Act” progressing fast, the UK would ideally move faster in setting out its vision for AI law and regulation.

The Strategy was published in September 2021 and “governance and regulation” is one of three “pillars”, alongside investing in the AI ecosystem and supporting a transition to an AI-enabled economy. The Strategy aims to make “Britain a global AI superpower” through, in part, building the “most pro-innovation regulatory environment in the world”.

The UK is well placed to continue to be an AI superpower: investment in AI has been the third highest in the world (after the US and China); world-leading AI companies including DeepMind, Graphcore and BenevolentAI are based here; and we have leading centres of academic excellence brought together by the Alan Turing Institute. Some important legal and regulatory milestones have been achieved, such as the Centre for Data Ethics and Innovation (CDEI)’s AI Assurance Roadmap, but we eagerly await announcements on key aspects of the UK’s direction, especially on access to talent, the balance between privacy and innovation, access to data (including text and data mining exceptions to copyright) and where the Government’s regulatory work will be focused.

Who will regulate AI in the UK?

Access to talent will be key to national success. The Strategy notes there were “over 110,000 UK job vacancies in 2020 for AI and Data Science roles” and that “Half of surveyed firms’ business plans had been impacted by a lack of suitable candidates with the appropriate AI knowledge and skills”. The UK’s competition for global talent has probably not been helped by the end of free movement of workers with the European Union, and the Strategy promises new visa regimes and, for the longer term, investment in education and training.

The shortage of specialists applies across all AI-related activities – including the legislature, regulators and lawyers. This may affect the UK’s ability to produce UK-specific legal and regulatory frameworks. The Strategy refers to the Government “working with The Alan Turing Institute and regulators to examine regulators’ existing AI capacities” and, hopefully, the results will be published soon. The availability of experts may affect the Government’s decisions on whether to pursue sector-led regulation and/or “cross-cutting” AI regulation.

How should AI be regulated in the UK?

The Strategy recognizes that the UK may need to move from its current focus on sector-led regulation towards “cross-cutting” AI regulation. It notes that sector-specific regulators may be best placed to deal with the complexities of specific applications of AI and interlinked technology, and may be able to develop and enforce rules more quickly, whereas cross-cutting regulation might avoid inconsistencies and unclear responsibilities as between regulators and would be more likely to address the broad harms that AI may present.

The Strategy scheduled a White Paper by the Office for AI on the direction of regulation for early this year, but this has been delayed. At the Westminster Forum, Blake Bower (Director of Digital and Technology Policy, Department for Digital, Culture, Media and Sport (DCMS)) explained that the views collected by the UK Government ranged widely and that reaching a suitable compromise position will take more time.

While cross-cutting regulation (such as something akin to the EU’s proposed “AI Act”) may be desirable, it may be more practical for the UK to continue to focus attention on sector-specific regulation.

Key work on AI governance and regulation in the UK

The Strategy (which Blake Bower explained was deliberately ambitious in scope) identifies valuable legal and regulatory priorities for AI, including:

As an IP lawyer, I am particularly keen to see acceleration of the work on three areas:

In short, I suggest the UK continues to focus on sector-specific regulation and accelerates measures to increase access to talent and to realign IP rights for AI.

Matt Hervey is Head of Artificial Intelligence (UK) at Gowling WLG (UK) and advises on Artificial Intelligence (AI) and IP across all sectors, including automotive, life sciences, finance and retail. Find out more about Matt Hervey on the Gowling WLG website. He is co-editor of The Law of Artificial Intelligence (Sweet & Maxwell).
