The World Intellectual Property Organization (WIPO) held its second “conversation” on Artificial Intelligence (AI) and IP last week. In the run-up it published a revised issues paper (discussed in our blog ‘WIPO’s revised paper on IP policy and AI’). One of the additions to the paper was a definition of AI:
“a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence, with limited or no human intervention”
Defining AI is no trivial matter, and the detail of any definition may affect the scope of policy, regulation and law. The White House (or more precisely the Executive Office of the President National Science and Technology Council Committee on Technology) captured the range of possibilities in its 2016 report Preparing for the Future of Artificial Intelligence:
“There is no single definition of AI that is universally accepted by practitioners. Some define AI loosely as a computerized system that exhibits behavior that is commonly thought of as requiring intelligence. Others define AI as a system capable of rationally solving complex problems or taking appropriate actions to achieve its goals in whatever real world circumstances it encounters.”
The upper end of that range appears to be limited to what is known as artificial general intelligence – systems that can handle any scenario, not just the “narrow domain” that current AI is able to tackle. Artificial general intelligence is still the stuff of science fiction. Indeed, many and perhaps all humans would struggle always to meet the definition of rationally solving complex problems or taking appropriate actions to achieve their goals in whatever real world circumstances they encounter.
The definitional challenges were discussed by Matthew Scherer in his 2016 article ‘Regulating Artificial Intelligence Systems’ (Harvard Journal of Law and Technology), in which he suggests that the problem lies in the lack of an adequate definition of intelligence. He quotes Dr John McCarthy, a leading AI pioneer widely credited with coining the term “artificial intelligence” itself. McCarthy said there is not yet any solid definition of artificial intelligence that does not depend on relating it to human intelligence: “The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others”.
Given these challenges, the wording adopted by WIPO seems a pithy and effective definition. Indeed, in just a few words the WIPO definition encapsulates many of the challenges of AI:
- The definition relates to tasks “considered” to require human intelligence. What is “considered” to require human intelligence is a moving frontier. Moreover, as any feat of computing power becomes commonplace, it is no longer considered exclusive to human intelligence (and no longer remarkable enough to merit inclusion in any special category of computing). No-one thinks of the pocket calculator as AI in any important sense.
- The focus on tasks normally requiring human involvement foreshadows the economic and social impacts of widespread adoption of AI: machines performing once-human tasks faster, more cheaply, more reliably and tirelessly, with consequent changes in human employment, including the end of some jobs and, hopefully, the creation of new ones.
- The definition relates to carrying out once-human tasks but is not limited to computers performing those tasks in the same way as humans. Much of the regulatory concern with AI arises from just how differently it may approach a task, including the risk of AI systems reaching decisions in ways that defy accountability (e.g. because they do not produce a trail of evidence or because their decision-making may lack transparency and explainability).
- The definition includes AI performing tasks “with limited or no human intervention”. This is also a significant regulatory concern: AI systems may act autonomously, and some may change their behaviour autonomously over time. This makes current models of liability hard to apply and has been the subject of considerable regulatory attention, such as the European Commission’s report Liability for Artificial Intelligence and other emerging digital technologies.
In short, AI raises social, ethical, regulatory and legal challenges by definition. Gowling WLG has a cross-practice and cross-sector team of AI specialists to advise on these challenges.