Living with artificial intelligence: can the law cope?

As part of our Thought Leaders series, we explore the copyright issues arising from AI-generated content and the status in patent law of AI-generated inventions.

Forward features discuss and celebrate the best of innovation and exploration from the scientific and entrepreneurial worlds.

Having loomed over the horizon for several years, artificial intelligence (AI) has now hit society with substantial force. Every aspect of our lives seems to be subject to it – from serious concerns about the impact of AI on matters of national security to light-hearted gimmicks such as ‘AiPA’ (Artificially Intelligent Pale Ale), the craft beer produced by Nethergate brewery in collaboration with AI[1] – and the media cannot get enough of reporting its implications. The description ‘transformative’ barely does justice to what we are seeing, as businesses and the general public alike are awed by its potential as a power for both good and bad.

For regulators and lawyers, AI represents an almost unprecedented challenge. The speed of its impact and the rapidity of its development have generated deep questions, both practical and philosophical, not least because society is now split between those who are thrilled by AI’s possibilities and those who issue dire warnings about its downside.

PwC, in its ‘Sizing the Prize’ report[2], for example, estimates that AI could add $15.7 trillion to the global economy by the end of the decade. But writing in The Times recently, Ed Newton-Rex declared, in relation to AI, that ‘The greatest art heist in history is happening right in front of us, and we are being gaslit by its perpetrators...’[3]

It is this tension between the possible abuses and the probable benefits that is giving lawmakers and regulators such a headache. Traditional legislation, often written long before AI in its current guise was even conceived, seems no longer fit for purpose. So, across the jurisdictions of Europe and the US (as well as Asia), a critical process is now under way to examine whether current laws can still be regarded as relevant to the emerging AI-shaped landscape and, if not, what changes should be made. It’s a tricky challenge given how dynamic the technology is.

Many of the most urgent concerns regarding the use of AI relate to safety risks, cybersecurity threats and privacy rather than intellectual property (IP). These issues are being addressed at international summits such as the G7 Hiroshima Summit[4], where the leaders of the Group of Seven met to further international discussions on inclusive AI governance and interoperability with a view to achieving trustworthy AI in line with shared democratic values, and the AI Safety Summit, at which 28 countries and the EU signed up to the Bletchley Declaration[5] and ten countries and nine AI developers signed a safety-testing agreement[6] requiring tests on next-generation AI models before and after deployment.

However, while there seems to be an internationally joined-up approach to AI on these security-critical matters, at least at the level of statements of principle, there remains considerable ground to cover in addressing them. For instance, the report prepared by the OECD ahead of the Hiroshima Summit identifies infringement of intellectual property rights (IPRs) as the second greatest risk posed by AI, considered alongside the privacy risks arising from the deployment of AI systems[7].

In the case of AI-generated content, controversy has been bubbling away for some time. To take just one example, the Italian Data Protection Authority (Garante per la protezione dei dati personali) last year sent OpenAI (the US-based AI research organisation) a notice of alleged breaches of data protection regulations, claiming that it did not have a lawful basis for processing the large amounts of personal data used to train ChatGPT. This challenge strikes at the heart of OpenAI’s business model. That so fundamental a point is under debate demonstrates how far we have to go.

There are, therefore, immediate areas of concern in the world of IP, including in particular the copyright issues arising from AI-generated content and the status in patent law of AI-generated inventions.

The position in the UK: Copyright

In the UK the key piece of legislation is the Copyright, Designs and Patents Act 1988 (CDPA). As might be expected with any legislation whose subject matter develops and evolves more quickly than even the most agile legislative process, by the 2010s the CDPA was already looking due for an update, and in 2014 the Act was amended[8] to include a text and data mining (TDM) exception that went beyond the existing research exemption and permitted the use of previously protected material in computational analysis. The exception was limited to non-commercial research purposes and applied only where the copy was accompanied by sufficient acknowledgement (where practicable). This put the UK ahead of many of its international peers and gave a significant competitive advantage to those carrying out computerised analysis of large-scale data sets in the UK.

Within a couple of years, however, the rest of the world started to catch up. With the introduction of the EU Directive on Copyright in the Digital Single Market[9] (CDSM) in 2019, the EU implemented its own, broader, TDM exception that permitted data mining for any purpose – including commercial purposes – unless expressly forbidden by the rightsholders. The UK, by then no longer a member of the EU, did not follow suit; the member states passed national legislation to give effect to the CDSM and its TDM exception. This opened the door to massive growth in the exploitation and use of ‘open source’ and ‘commons’ material, and although the UK had once been ahead of its time from the perspective of those seeking to make use of copyright material, it had now fallen behind.

The UK government carried out a consultation on AI and IP: copyright and patents[10] (the outcome of which was published in June 2022), which proposed a permissive amendment to the UK’s TDM exception, bringing it into line with the EU and much of the rest of the world. In December 2022 the government published a new national AI strategy[11], but by February 2023 the proposed machine learning exemption from copyright infringement (covering text and data mining) had been abandoned. Undeterred, the government issued an AI White Paper consultation[12] in March 2023, designed to achieve a ‘pro-innovation approach to AI regulation’. This stated that the government was keen to take an ‘agile and iterative’ approach to AI, based on a framework underpinned by five principles[13] rather than any new legislation. Following the White Paper, in June 2023 the government proposed a voluntary code of practice intended to be acceptable to all the interested parties, including those in the creative industries.

Unfortunately, it did not get the hoped-for reception.

Earlier this year the government confirmed that it had, indeed, failed to reach agreement between developers and rightsholders on its copyright and AI code of practice. There had been a massive backlash against the proposals from rightsholders, academic experts and industry bodies, including the Creative Rights Alliance (CRA)[14], which represents more than 500,000 creator members and several million individuals through creator-led groups, trade associations and unions across the UK. This powerful opposition argued that ‘without creators’ rights to copyright protection over the works they create there is little incentive to invest in their own future careers’[15]. Without a provision in the amended TDM exception to provide such an incentive, the government backed down. It was always going to be difficult to settle on a mutually agreeable baseline for a single, unifying code of practice, but now that it has been abandoned the government has asked UK-based regulators whose activities are likely to be impacted to publish their own updates outlining their strategic approaches to AI by 30 April 2024. The UK Information Commissioner’s Office released its response, ‘Regulating AI: The ICO’s Strategic Approach’[16], on 1 May 2024. It will be particularly interesting to see what these regulators come up with and whether any common themes emerge, such that it might be possible to settle on a collective approach.

An interesting proposal has been made in the shape of a Private Members’ Bill, the Artificial Intelligence (Regulation) Bill[17]. Lord Holmes of Richmond presented the bill in the House of Lords late last year and just this month it passed and was sent to the House of Commons[18]. The bill includes the concept of a new ‘AI Authority’, which would require those making any use of copyright works in training AI systems to provide a record of all third-party data and intellectual property used, alongside confirmation of all appropriate consents. While this is unlikely to satisfy the creative industries, it goes some way towards providing a framework of increased transparency for what can otherwise be a largely anonymous process of harvesting online data and copyright works for unattributed and unacknowledged use in large-scale analysis and AI training. However, as with many Private Members’ Bills, it is unlikely to have any practical impact beyond raising awareness and applying pressure on the government to address the matter. Given that AI is already one of the most high-profile issues facing society, and that the government has been clear it wishes to avoid legislating on AI, the bill is unlikely to progress much further.

Overall, the regulatory landscape for AI and copyright is uncertain, so it may be to the courts and case law that we turn for guidance on issues such as infringement and whether copyright arises in AI-generated works. Of particular interest is the current dispute between Getty Images and Stability AI, in which the global picture library is fighting what it sees as the infringement of its material through its use to train Stability’s AI. Getty has deep enough pockets to pursue this multi-jurisdictional case to the end. If the parties don’t settle, the outcome could mark a critical moment in the unfolding story of AI and copyright.

In the meantime, all that can be said is that there is a growing consensus on both sides of the debate about the need for ‘transparency’.

This is certainly reflected in the approach taken by the EU in its AI Act. On the one hand, the creative industries are keen to hold out for transparency on the basis that it will pave the way for acknowledgement and remuneration. On the other hand, the practical implications of implementing transparency obligations in respect of the use of online data and copyright works in analysis and AI training are significant, and there are few realistic solutions that do not impose an onerous burden on developers. It is little wonder we are at an impasse. However, if agreement is to be reached, transparency is likely to be at the heart of it.

The position in the UK: Patents

AI-Generated Inventions

Meanwhile, in the field of AI-generated inventions the UK courts have provided the stage for a series of pioneering and precedent-setting cases, establishing the boundaries of what AI can – or rather cannot – do. You will more than likely have heard of ‘DABUS’, the AI device at the centre of Missouri-based inventor and AI researcher Dr Stephen Thaler’s mission to test the limits of intellectual property law with respect to AI.

Dr Thaler developed an advanced AI device called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience). He then used DABUS to generate various inventive concepts – including the subjects of the two patent applications at the heart of the dispute, filed in the UK for (1) a fractal geometry-based interlocking food container; and (2) a device employing a form of flashing light for attracting enhanced attention – and sought patent protection for these inventions in various jurisdictions. However, rather than naming himself as the inventor in these applications (as he might have done on the basis that he had used DABUS as a tool to generate potential inventive concepts), Dr Thaler named the AI as the inventor and himself as the applicant and the person entitled to the inventions by virtue of his ownership of DABUS.

The applications were rejected by the UK Intellectual Property Office on the grounds that (a) an AI machine could not be regarded as an inventor for the purposes of UK patent law, so the applications lacked a valid inventor; and (b) Dr Thaler’s ownership of DABUS did not establish a chain of entitlement to the inventions it generated[19], so he lacked the legal title to pursue the applications. Dr Thaler appealed the decision to the Patents Court, the Court of Appeal and ultimately the Supreme Court, but these appeals were unsuccessful and the UK’s highest court held that a patent cannot be granted where the named inventor is not a natural person. Recognition could not, therefore, be given to a non-person piece of kit, no matter how intelligent it seemed to be. End of story – at least for the time being.

Of course, there is still plenty of scope for applications to be made for inventions in which AI has played a part – perhaps a significant part – but the inventor must still be a named natural person. The US Patent and Trademark Office has recently issued guidance on this very subject, considering how contributions made to inventions by AI devices should be approached in the context of patent protection.

While it seems the position on the natural-person status of an inventor is unlikely to alter, there is still scope for many other things to change in the future. As Lady Justice Elisabeth Laing commented in the Court of Appeal, reflecting on the Patents Act 1977, now more than forty years old:

‘Whether or not thinking machines were capable of devising inventions in 1977, it is clear to me that Parliament did not have them in mind when enacting this scheme. If patents are to be granted in respect of inventions made by machines, the 1977 Act will have to be amended.’

The upshot of these developments is that what comes next in the field of AI and IP – specifically copyright and patents – is uncertain and open to speculation. The only thing we can say with certainty is that everyone will be watching this space intently.

References

  1. https://nethergate.co.uk/aipa-artificially-intelligent-pale-ale
  2. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
  3. https://www.thetimes.co.uk/article/ai-companies-swipe-work-and-jobs-60wlzlpsd
  4. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/, see paras 38 and 39 (Digital)
  5. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
  6. https://www.gov.uk/government/publications/ai-safety-summit-2023-chairs-statement-safety-testing-2-november/safety-testing-chairs-statement-of-session-outcomes-2-november-2023
  7. OECD Report dated 7 September 2023, G7 Hiroshima Process on Generative Artificial Intelligence (AI) Towards a G7 Common Understanding on Generative AI at https://www.oecd-ilibrary.org/docserver/bf3c0c60-en.pdf?expires=1708255043&id=id&accname=guest&checksum=65C30EE162685E778A94FCD4C18F88EF
  8. Directive 2001/29 on the harmonisation of certain aspects of copyright and related rights in the information society (InfoSoc Directive), implemented in the UK by the Copyright and Rights in Performances (Research, Education, Libraries and Archives) Regulations 2014/1372, reg. 3(2) (1 June 2014)
  9. Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market
  10. https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents/outcome/artificial-intelligence-and-intellectual-property-copyright-and-patents-government-response-to-consultation
  11. https://www.gov.uk/government/publications/national-ai-strategy
  12. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
  13. (1) Safety, security and robustness; (2) Appropriate transparency and explainability; (3) Fairness; (4) Accountability and governance; and (5) Contestability and redress.
  14. Coalition of major trade associations, unions and creator-led groups which exists to promote, protect and further the interests of creators through advocacy and campaigning, including on fair contracts, pay, working conditions and intellectual property.
  15. https://committees.parliament.uk/writtenevidence/111035/html
  16. https://ico.org.uk/media/about-the-ico/consultation-responses/4029424/regulating-ai-the-icos-strategic-approach.pdf
  17. https://bills.parliament.uk/bills/3519
  18. https://hansard.parliament.uk/lords/2024-05-10/debates/197D481F-23DF-4768-943B-7E25BE1AA6B4/ArtificialIntelligence(Regulation)Bill(HL)
  19. Here an interesting argument on the doctrine of accession was employed. This doctrine states that where a new item of tangible property is generated by an existing item of tangible property, the owner of the original property will also own the new property. For example, where a farmer owns a cow and that cow produces a calf, by the doctrine of accession the farmer also owns the calf. In the case of the workings of DABUS, however, the doctrine was found not to apply.


This Forward feature was originally written by Edward Fennell and then adapted by Emma Kennaugh-Gallacher and Alessia Dalla Libera.