
Could standards for Artificial General Intelligence save humanity?

For those interested in some lockdown listening on Artificial Intelligence (AI), we recommend an interview with Stuart Russell broadcast on the UK’s BBC Radio 4.

Russell explains the critical differences between current AI and “artificial general intelligence”. Whereas current AI can only achieve narrow tasks, artificial general intelligence might tackle any problem. Unfortunately, it might alight on single-minded, damaging solutions – or, at least, solutions that humans would wish to avoid. Russell gives the example of removing carbon dioxide from the atmosphere by turning the oceans to acid. He invokes the fable of King Midas’ single-minded and disastrous desire for gold. He suggests that we need to limit these risks, in the engineering of AI, by making artificial general intelligence deferential to the views of humans.

Of particular interest to lawyers, Russell discusses the roles of regulation and standards. On the premise that technology companies are “allergic” to regulation, he suggests standards would be a more welcome and effective solution. By analogy, he suggests, bridges do not fall down because engineering standards prevent it, not because the law requires it.

From our experience of technical standards, we think the dichotomy between standards and regulation may not hold up.

In any case, work on regulation is ongoing in many countries, and some of it expressly considers the risks to humanity. Draft text prepared for the European Parliament on autonomous robots recited the risk to the human species presented by AI: “whereas ultimately there is a possibility that within the space of a few decades AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species“. As passed, the resolution only recited that “AI could surpass human intellectual capacity”. It did, however, keep a proposal to include kill switches in AI: it annexed a licence for designers of autonomous robots that would require, for example, designers to “integrate obvious opt-out mechanisms (kill switches) that should be consistent with reasonable design objectives“. The real challenge for regulation is to be fast and flexible enough to ensure safety without stifling innovation: hence the importance of industry consultations.

The potential for technical standards relating to AI is particularly interesting from the perspective of intellectual property. We have seen some work on standardising definitions and levels of AI, especially for automotive applications, and some discussion of standardising the use of sensors for autonomous vehicles. The broader adoption of standards in AI may see a role for “standard essential patents”. These have been a significant commercial aspect of previous technical standards, such as in telecommunications and audio, image and video compression, and are a particular expertise of our Intellectual Property (IP) team.
