The World Economic Forum has published an overview of its project on artificial intelligence (AI) procurement, AI Procurement in a Box. The project addresses the procurement of AI for government administration. It notes the challenges facing government procurement officials confronted with a technology whose benefits and risks are “impossible to predict” and which requires cross-sector, interdisciplinary, multi-stakeholder input.
The “Toolbox” aims to offer “a set of complementary tools to demonstrate the emerging global consensus on the responsible deployment of AI technologies”. These include procedures for recording relevant information and explanations, in order to preserve due process and predictability and to safeguard fairness and impartiality. Particular concerns include the misuse of sensitive data and the risk that automation by AI can amplify and propagate bias.
More generally, the overview suggests governments must drive the ethical development of AI. The report warns that a failure to do so “can limit accountability, undermine social values, entrench the market power of large businesses, decrease public trust and ultimately slow digital transformation in the public sector.”
The UK’s Guidelines for AI Procurement were published at the same time, as part of the World Economic Forum’s project. Developed in collaboration with the World Economic Forum Centre, the Government Digital Service, the Government Commercial Function and the Crown Commercial Service, the UK Guidelines are aimed at central government departments considering the suitability of AI technology to improve existing services or as part of future service transformation. They cover key end-to-end procurement considerations, from preparation and planning of procurement activity, through selection and evaluation of tenders, to implementation and contract management.
Comment
For the UK, the AI Procurement in a Box toolkit and the Guidelines for AI Procurement complement, but do not replace, current procurement guidance and regulations. Specifically, they should be considered alongside existing policy on the use and procurement of technology and digital services, and the Outsourcing Playbook, which sets out guidance for central government bodies on service delivery, including outsourcing, insourcing, mixed economy sourcing and contracting.
The need for an ethical approach to AI – and for procedures that safeguard ethical standards – is equally important for private companies. As the Toolbox reflects, achieving ethical standards requires practical procedures: which stakeholders to involve, which questions to ask and answer, what evidence to collect, which checks to run (and when), and what default alternatives to AI to fall back on in case of failure.
This investment should be a win-win. Law and regulation always lag behind technology, and the lag is particularly acute for AI because of the speed of change, the reliance of most AI development on data (including personal data), and the unusual threats AI poses to the privacy, fairness and autonomy of individuals and society. Companies can minimise risk by anticipating the direction of future law and regulation – and pursuing ethical AI is the approach most likely to align products and services with later laws and regulations. It is also the best way to secure public good, protect reputation, avoid harm and maintain public trust in AI in general.
About the author(s)
Gowling WLG is an international law firm operating across an array of different sectors and services. Our LoupedIn blog aims to give readers industry insight, technical knowledge and thoughtful observations on the legal landscape and beyond.