In preparation for speaking to the Westminster Forum last week, I crystallized my current thinking on the legal and regulatory aspects of the UK’s ten-year National AI Strategy. In short, the Government’s consultations on privacy and IP and its proposed White Paper on the choice between sector-specific and “cross-cutting” regulation are important places to start. With the EU “AI Act” progressing fast, the UK would ideally move faster in setting out its vision for AI law and regulation.
The Strategy was published in September 2021 and “governance and regulation” is one of three “pillars”, alongside investing in the AI ecosystem and supporting a transition to an AI-enabled economy. The Strategy aims to make “Britain a global AI superpower” through, in part, building the “most pro-innovation regulatory environment in the world”.
The UK is very well placed to continue as an AI superpower: investment in AI has been the third highest in the world (after the US and China); world-leading AI companies including DeepMind, Graphcore and BenevolentAI are based here; and we have leading centres of academic excellence brought together by the Alan Turing Institute. Some important legal and regulatory milestones have been achieved, such as the Centre for Data Ethics and Innovation (CDEI)’s AI Assurance Roadmap, but we eagerly await announcements on key aspects of the UK’s direction, especially on access to talent, the balance between privacy and innovation, access to data (including text and data mining exceptions to copyright) and where the Government’s regulatory work will be focused.
Who will regulate AI in the UK?
Access to talent will be key to national success. The Strategy notes there were “over 110,000 UK job vacancies in 2020 for AI and Data Science roles” and that “Half of surveyed firms’ business plans had been impacted by a lack of suitable candidates with the appropriate AI knowledge and skills”. The UK’s competition for global talent has probably not been helped by the end of free movement of workers with the European Union, and the Strategy promises new visa regimes and, for the longer term, investment in education and training.
The shortage of specialists applies across all AI-related activities – including the legislature, regulators and lawyers. This may affect the UK’s ability to produce UK-specific legal and regulatory frameworks. The Strategy refers to the Government “working with The Alan Turing Institute and regulators to examine regulators’ existing AI capacities” and, hopefully, the results will be published soon. The availability of experts may affect the Government’s decisions on whether to pursue sector-led regulation and/or “cross-cutting” AI regulation.
How should AI be regulated in the UK?
The Strategy recognizes that the UK may need to move from its current focus on sector-led regulations towards “cross-cutting” AI regulation. It notes that sector-specific regulators may be best placed to deal with the complexities of specific applications of AI and interlinked technology and may be able to develop and enforce rules more quickly, whereas cross-cutting regulation might avoid inconsistencies and unclear responsibilities as between regulators and would be more likely to address the broad harms that AI may present.
The Strategy scheduled a White Paper by the Office for AI on the direction of regulation for early this year, but this has been delayed. At the Westminster Forum, Blake Bower (Director of Digital and Technology Policy, Department for Digital, Culture, Media and Sport (DCMS)) explained that the views collected by the UK Government ranged widely and that reaching a suitable compromise position would take more time.
While cross-cutting regulation (such as something akin to the EU’s proposed “AI Act”) may be desirable, it may be more practical for the UK to continue to focus attention on sector-specific regulation.
- Progress is being made on immediate issues by sector-specific regulators in, for example, transport, finance and health care. But more sector-specific work is required in highly regulated areas, particularly transport and health care, to remove regulatory barriers to the development and deployment of AI-based products. Significant work is needed, for example, on how to prove the safety of AI-driven products such as autonomous vehicles and medical devices. These are areas of intense investment and research which may create significant social benefits directly and which are likely to have broader impacts on innovation and the economy in general.
- As discussed above, because of the general shortage of AI specialists, UK law makers and regulators may lack the bandwidth to pursue both sector-led regulation and “cross-cutting” AI regulation. Moreover, any UK “cross-cutting” AI regulation would need regulators to enforce it, making the competition for the talent needed for critical sector-specific regulation even more acute.
- The marginal benefit of the UK developing cross-cutting rules that differ from the EU’s is uncertain. As the Strategy recognizes, international activity may “overtake a national effort to build a consistent approach”. The EU’s AI Act is now well advanced and the “Brussels effect” may apply, as it has with GDPR: companies outside the EU may end up complying with the EU’s regulations as a de facto standard for global regulatory compliance. Moreover, as the Strategy notes, the UK is also involved with many transnational bodies addressing AI, including the Council of Europe (Europe’s human rights organisation), the Organisation for Economic Co-operation and Development (OECD), United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Global Partnership on AI (GPAI).
- The Strategy suggests that AI raises unique challenges: “A system’s autonomy raises unique questions around liability, assurance, and fairness as well as risk and safety – and even ownership of creative content – in a way which is distinct to AI, and these questions increase with the relative complexity of the algorithm.” But many of the potential harms associated with AI are not new. Rather, AI often presents new factual scenarios that, while complex, could be addressed by applying existing laws on product liability, negligence, discrimination, fraud, etc. – both to remedy harms and to guide reasonable behaviours to mitigate risks. (Various chapters in The Law of Artificial Intelligence (Sweet & Maxwell) seek to predict how current UK laws will apply to the new factual scenarios.)
- More generally, AI is likely to accelerate social change. For example, social media recommendation algorithms are already blamed for a rise in political extremism. But the appropriate legal or regulatory responses (if any) to these developments are broad questions of social values, which are themselves developing, and require balancing competing legitimate interests such as freedom of speech and social unity. This applies to the effects of AI that are already beginning to be investigated, let alone to the unforeseen consequences of the growing use of AI. Cross-cutting regulation for AI in the UK may therefore be a long and evolving task as social norms emerge in the UK (and internationally), and it may prove more important to address automation, opaque decision making, social media, etc., in general rather than AI specifically.
- There is broad international consensus on the key general challenges presented by AI: transparency, robustness, bias, privacy and accountability. Research and investment are ongoing in these areas but there are, as yet, no clear technical solutions. Cross-cutting regulation is therefore currently likely to be limited to generalities rather than specific technical requirements – to question whether the use of AI is appropriate in the circumstances, to take reasonable steps to mitigate the risks (including human oversight) and to keep useful records. In other words, the sort of approach familiar from GDPR and good governance in general. This is the core approach the AI Act intends to require for “high-risk” AI systems and to encourage for all AI systems. The AI Act may be flawed – it may not cover downstream deployment of AI adequately, it may fail to articulate principles with which to identify prohibited and high-risk systems or to assess risk mitigation, and it may fail to address wider social impacts or give the public remedies (see Lilian Edwards’ opinion published by the Ada Lovelace Institute) – but it is difficult to see how the UK could usefully add its own national AI regulation while maintaining the Strategy’s stated aim for the UK to be both “the most trustworthy jurisdiction for the development and use of AI” and the most pro-innovation.
Key work on AI governance and regulation in the UK
The Strategy (which Blake Bower explained was deliberately ambitious in scope) identifies valuable legal and regulatory priorities for AI, including:
- determining the general direction for regulatory work;
- potential reforms to privacy requirements;
- potential reforms to IP law;
- promotion of a UK AI “assurance” ecosystem; and
- investigation into the viability of standards for AI.
As an IP lawyer, I am particularly keen to see acceleration of the work on three areas:
- promoting changes to international patent law to allow for the protection of inventions by AI (as I have said elsewhere (e.g. here), this may come to be needed in life sciences and other industries where R&D costs are massive and products can be readily copied; because international harmonisation will be slow, this work should be pursued now);
- expanding the copyright infringement exceptions for text and data mining to commercial activities, subject to an effective opt-out mechanism for rights holders, to make the UK competitive with the EU; and
- raising awareness among UK companies of the importance of trade secrets to protect AI and of the best practices for protecting them.
In short, I suggest the UK continues to focus on sector-specific regulation and accelerates measures to increase access to talent and to realign IP rights for AI.
About the author(s)
Gowling WLG is an international law firm operating across an array of different sectors and services. Our LoupedIn blog aims to give readers industry insight, technical knowledge and thoughtful observations on the legal landscape and beyond.