Right now, we’re at a tipping point. The Top-Ranked AI Nations – so-called TRAIN innovators and investors – are the US and China.
The fact they are both vying to be the world’s top AI economy and locked in a ‘digital cold war’ comes as no surprise. But there’s plenty going on elsewhere – the EU is leading efforts to regulate AI, while Canada is now the first nation with a national AI strategy.
As some regions tighten up on AI regulation, others are likely to be keener to lure cutting-edge companies by offering unfettered, unregulated environments, underpinned by a pro-innovation sentiment.
Where does this leave the related explosion in international AI patent applications?
It’s a choppy picture, particularly given that AI researchers from North America and China frequently co-operate. Open-source applications are available for all to use, and licensing is often accessible even for the most cutting-edge models. The private sector is the driving force behind AI in North America, with its share of emerging AI innovation jumping from around 10% in 2010 to 96% in 2021.
Right now, the Chinese government plays a big role. The state has issued significant subsidies, support and policy guidance to broadly direct researchers towards AI and patent filings. China is the largest filer of GenAI patents.
There’s clearly a lot at stake in every jurisdiction.
What’s in a picture?
Closer to home in the UK, the ongoing dispute between global photo agency Getty Images and Stability AI has the potential to significantly shape the future relationship between AI and copyright.
Stability AI has developed several generative AI applications, including its ‘Stable Diffusion’ system, which automatically generates images based on users’ text or image inputs. Getty Images claims Stability AI infringes its intellectual property rights in two ways: through the copyrighted images used as source-material data inputs for developing Stable Diffusion, and through the AI-generated images that Stable Diffusion subsequently outputs.
Getty claims these outputs are fundamentally synthetic images reproducing a substantial part of its copyrighted works, which are clearly badged with the Getty brand. It also claims that Stability AI is responsible for secondary copyright infringement, alleging that Stable Diffusion was imported into the UK without authorisation when it was made available via the AI developer platforms GitHub, Hugging Face and DreamStudio.
Infringement of database rights and trademarks, along with passing off, is also alleged.
Stability AI denies that any of the alleged infringing acts have taken place in the UK.
A picture really can be worth a thousand words
Stability AI has filed different defences to each of the claims, notably by making a clear distinction between user outputs from text prompts when compared to those arising from user image prompts.
For instance, it is claimed synthetic images generated from text prompts are ‘created without use of any part of any copyright works.’ Regarding image prompts, Stability AI puts responsibility in the hands of system users, who it states can control the degree to which Stable Diffusion will transform an input image when it comes to generating the output image.
Stability AI claims that any copying of input images without a licence is ‘unequivocally an act of the user alone.’ It also claims, ‘output images are in substance and effect partial reproductions of the input image provided by the user and any resulting act of copying is that of the user alone’.
Can the pastiche exception provide an escape route?
Stability AI’s second-string defence hinges on the pastiche exception to copyright infringement, covered by the Copyright, Designs and Patents Act 1988.
Stability AI insists that its work falls within Section 30A of the Act, which provides that fair dealing with a work for the purposes of caricature, parody or pastiche does not infringe copyright. Given that there is very little case law on what constitutes ‘pastiche’, it will be interesting to see the court’s determination on this point alone.
Inevitably the debate will continue until next summer’s High Court trial. It’s a case that is likely to be eyed carefully from across the pond.
Analogies may be drawn between Stability AI’s pastiche-exception arguments in the UK and the question of whether the ‘fair use’ limitation on copyright infringement in the US applies where the outputs of generative AI systems simply ‘mimic’ the copyrighted content input to those systems.
US courts are already set to scrutinise the issue of fair use in a lawsuit brought by the New York Times (NYT) against OpenAI and Microsoft, relating to their generative AI systems. The NYT has accused OpenAI and Microsoft of seeking to ‘free ride’ on its investment in journalism by using its published content to create material without permission or payment of a fee. It alleges that OpenAI and Microsoft copied NYT works in ways that fall outside the ‘fair use’ doctrine under US copyright law.
In response, OpenAI’s arguments include that the claim is time barred by the statute of limitations and that it did not have actual knowledge of the alleged infringing activities. OpenAI also insists its conduct is in fact protected by ‘fair use’ because the unlicensed use of copyrighted content to train generative AI models serves a new and transformative purpose.
Elsewhere, more conciliatory and co-operative arrangements are in place. Late last year, German publisher Axel Springer and OpenAI negotiated a deal allowing the developer to train its AI systems on Axel Springer content. While this highlights the potential for co-operative partnerships, the outcome of ongoing disputes involving global content creators like Getty Images and the NYT will determine the future rules of engagement between rights holders and AI developers.
The impact of AI on the patent landscape
The global AI race is accelerating, and it is hugely complex, particularly given the technology’s potential reach and impact. As the technology advances, policy develops and regulation is debated, it will be interesting to see whether any international consensus emerges, or whether we are faced with a fractured, regional approach to the balance between developers, rights holders and the public interest.
Kate Swaine is Head of IP for Gowling WLG’s UK practice. She is part of the panel at the INTA 2024 Leadership Meeting in New Orleans in November, considering the current status of AI and copyright. Fellow Gowling WLG Partner Jayde Wood will also be a speaker-panellist. She is discussing ethical and practical considerations behind adopting AI technologies in day-to-day trademark practice.
About the author(s)
Gowling WLG is an international law firm operating across an array of different sectors and services. Our LoupedIn blog aims to give readers industry insight, technical knowledge and thoughtful observations on the legal landscape and beyond.