
Stuart Russell on AI Regulation

The first of Professor Stuart Russell’s “Reith Lectures” for the BBC was recorded tonight. The lecture series, “Living with Artificial Intelligence”, will focus on the profound and potentially dangerous impacts of AI and will discuss how we might control AI to ensure it acts for our good. Later lectures will explain the need to develop AI able to understand the nuances of real-world goals – whether or not we manage to articulate such goals or our underlying assumptions (a theme explored in Russell’s book, Human Compatible). Unsurprisingly, questions from the audience drew Professor Russell towards the issue of AI regulation.

Regulating artificial general intelligence

Professor Russell spoke about the potential future dangers of an artificial general intelligence (AGI). AGI would not require consciousness (as in The Terminator or Ex Machina) to be dangerous – it would merely need to follow a goal that had been poorly chosen by a human. This is a scenario explored since antiquity, from the myth of King Midas to The Sorcerer’s Apprentice (for which Russell cited Goethe, not Disney).

Russell does not consider AGI likely in the next few years but warns that a sudden breakthrough should not be ruled out. Moreover, the risks of AGI are so great that current regulatory attention is justified.

He is clear, however, that there is, as yet, no technical solution to the problem of control he identifies. Regulators could forbid the use of current AI methods but could not point to safe alternatives. Russell suggests we need to rethink the last sixty years of AI research.

Regulating current artificial intelligence

More immediate, then, is the potential to regulate against the harms of current AI. Professor Russell is clearly pleased with the lead taken by the European Union on regulating AI (see the proposed regulations here). He called for regulation “with teeth”. In particular, he is keen for regulations against the use of AI to impersonate a human. Russell considers this a “lie” – whether the impersonation is to commit a crime (e.g. to trick someone into revealing a password), to generate fake news or for commercial ends – and suggests the law should not allow it.

Professor Russell is particularly horrified by the promotion of political extremism and fake news by social media algorithms. He described this (in the discussion following the lecture) as worse than the Chernobyl disaster. He did not propose any specific regulatory solutions. Rather, he invited social media platforms to share their data so that researchers can understand the problem. He believes (or hopes) that the new generation of AI specialists in tech companies is more concerned with ensuring AI is used for the public good.

He also warned against countries approaching the development of AI as a race. Professor Russell considers that a race is unnecessary because the potential rewards will be so great that no country would need to keep the benefits to itself. And, from a regulatory perspective, a race is dangerous because it encourages cutting corners.

The lecture will be broadcast on 1 December 2021. A German film crew were also recording throughout for a documentary on Professor Russell. For an earlier discussion of Professor Russell’s views and the potential for standards for AI, see here.

Matt Hervey is Head of Artificial Intelligence (UK) at Gowling WLG (UK) and advises on Artificial Intelligence (AI) and IP across all sectors, including automotive, life sciences, finance and retail. Find out more about Matt Hervey on the Gowling WLG website. He is co-editor of The Law of Artificial Intelligence (Sweet & Maxwell).
