The House of Lords, the UK’s upper legislative chamber, has published a further report on Artificial Intelligence, focussing on ethical AI, safeguards over the use of data, support for jobs and new skills, central training for sector-specific regulators, a champion for AI in the public service and immigration rules to attract AI specialists to the UK.
The report, AI in the UK: No Room for Complacency, puts strong emphasis on enabling public understanding of the use of data and safeguards such as “data trusts”. It recognises “ethical AI” as the only “sustainable way forward” and calls on the UK’s Centre for Data Ethics and Innovation to develop two ethical frameworks: “one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses”. It recommends support for jobs and retraining focussed on industries most at risk. It calls on the UK’s Information Commissioner’s Office to coordinate the development of training for sector regulators to enable them to “identify gaps in regulation, and to learn about AI and apply it to their sectors”. It calls for a Cabinet Committee within Government to develop a five-year plan for AI and the appointment of a Chief Data Officer to promote the use and understanding of AI in the public service. It calls for changes to immigration rules to promote the study, research and development of AI in the UK.
The full recommendations are set out below with some explanations and useful links added.
Public understanding and data
- “Artificial intelligence is a complicated and emotive subject. The increase in reliance on technology caused by the COVID-19 pandemic has highlighted the opportunities and risks associated with the use of technology, and in particular, data. It is no longer enough to expect the general public to learn about both AI and how their data is used passively. Active steps must be taken to explain to the general public the use of their personal data by AI. Greater public understanding is essential for the wider adoption of AI, and also to enable challenge to any organisation seeking to deploy AI in an ethically unsound manner.”
- “The Government must lead the way on actively explaining how data is being used. Being passive in this regard is no longer an option. The general public are more sophisticated than they are given credit for by the Government in their understanding of where data can and should be used and shared, and where it should not. The development of policy to safeguard the use of data, such as data trusts, must pick up pace, otherwise it risks being left behind by technological developments. This work should be reflected in the National Data Strategy.”
- “The AI Council [a non-statutory expert committee of independent members set up to provide advice to the Government and high-level leadership of the AI ecosystem], as part of its brief from Government to focus on exploring how to develop and deploy safe, fair, legal and ethical data-sharing frameworks, must make sure it is informing such policy development in a timely manner, and the Government must make sure it is listening to the Council’s advice. The AI Council should take into account the importance of public trust in AI systems, and ensure that developers are developing systems in a trustworthy manner. Furthermore, the Government needs to build upon the recommendations of the Hall-Pesenti Review, as well as the work done by the Open Data Institute, in conjunction with the Office for AI and Innovate UK, to develop and deploy data trusts as envisaged in the Hall-Pesenti Review.”
Ethics
- “Since the Committee’s report was published, the conversation around ethics and AI has evolved. There is a clear consensus that ethical AI is the only sustainable way forward. Now is the time to move that conversation from what are the ethics, to how to instil them in the development and deployment of AI systems.”
- “The Government must lead the way on the operationalisation of ethical AI. There is a clear role for the CDEI [Centre for Data Ethics and Innovation] in leading those conversations both nationally and internationally. The CDEI, and the Government with them, should not be afraid to challenge the unethical use of AI by other governments or organisations.”
- “The CDEI should establish and publish national standards for the ethical development and deployment of AI. National standards will provide an ingrained approach to ethical AI, and ensure consistency and clarity on the practical standards expected for the companies developing AI, the businesses applying AI, and the consumers using AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses. These two frameworks should reflect the different risks and considerations at each stage of AI use.”
Jobs and skills
- “It is imperative that the Government takes steps to ensure that the digital skills of the UK are brought up to speed, as well as to ensure that people have the opportunity to reskill and retrain to be able to adapt to the evolving labour market caused by AI.”
National Retraining Scheme
- “There is no clear sense of the impact AI will have on jobs. It is however clear that there will be a change, and that complacency risks people finding themselves underequipped to participate in the employment market of the near future.”
- “As and when the COVID-19 pandemic recedes and the Government has to address the economic impact of it, the nature of work will change and there will be a need for different jobs. This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. In particular the AI Council should identify the industries most at risk, and the skills gaps in those industries. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.”
Public trust and regulation
- “The challenges posed by the development and deployment of AI cannot currently be tackled by cross-cutting regulation. The understanding by users and policymakers needs to be developed through a better understanding of risk and how it can be assessed and mitigated. Sector-specific regulators are better placed to identify gaps in regulation, and to learn about AI and apply it to their sectors. The CDEI and Office for AI can play a cross-cutting role, along with the ICO [Information Commissioner’s Office], to provide that understanding of risk and the necessary training and upskilling for sector-specific regulators.”
- “The ICO must develop a training course for use by regulators to ensure that their staff have a grounding in the ethical and appropriate use of public data and AI systems, and its opportunities and risks. It will be essential for sector-specific regulators to be in a position to evaluate those risks, to assess ethical compliance, and to advise their sectors accordingly. Such training should be prepared with input from the CDEI, Office for AI and Alan Turing Institute [the UK’s national institute for data science and artificial intelligence]. The uptake by regulators should be monitored by the Office for AI. The training should be prepared and rolled out by July 2021.”
- “We commend the Government for its work to date in establishing a considered range of bodies to advise it on AI over the long term.”
- “However, we caution against complacency. There must be more and better coordination, and it must start at the top. A Cabinet Committee must be established whose terms of reference include the strategic direction of Government AI policy and the use of data and technology by national and local government.”
- “The first task of the Committee should be to commission and approve a five-year strategy for AI. Such a strategy should include a reflection on whether the existing bodies and their remits are sufficient, and the work required to prepare society to take advantage of AI rather than be taken advantage of by it.”
A Chief Data Officer
- “The Government must take immediate steps to appoint a Chief Data Officer, whose responsibilities should include acting as a champion for the opportunities presented by AI in the public service, and ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.”
Autonomy Development Centre
- “We believe that the work of the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.”
The United Kingdom as a world leader
- “The UK remains an attractive place to learn, develop, and deploy AI. It has a strong legal system, coupled with world-leading academic institutions, and industry ready and willing to take advantage of the opportunities presented by AI. We also welcome the development of the Global Partnership on Artificial Intelligence and the UK’s role as a founder member.”
- “It will however be a cause for great concern if the UK is, or is seen to be, less welcoming to top researchers, and less supportive of them. The Government must ensure that the UK offers a welcoming environment for students and researchers, and helps businesses to maintain a presence here. Changes to the immigration rules must promote rather than obstruct the study, research and development of AI.”
About the author(s)
Matt Hervey is Head of Artificial Intelligence (UK) at Gowling WLG (UK) and advises on Artificial Intelligence (AI) and IP across all sectors, including automotive, life sciences, finance and retail. Find out more about Matt Hervey on the Gowling WLG website. He is co-editor of The Law of Artificial Intelligence (Sweet & Maxwell).