The AI Opportunities Action Plan is a huge blow to the UK's creative industries. Unfortunately, Labour seems to have made up its mind.
The AI Opportunities Action Plan, commissioned by the UK government and written by Matt Clifford, was published today. It’s a 25-page document that sets out, in 50 recommendations, a plan for how the UK can “embrace the transformative potential of AI” in a bid for economic growth.
AI does have the potential to boost economic growth, and I agree with a lot of what Matt recommends. Investment in compute resources, support for AI education, attracting AI talent from overseas: these are all sensible ideas. But the recommendation on copyright law is, I think, hugely misguided, and will seriously harm the UK’s creative economy if it is implemented.
Unfortunately, the Labour government has said it will “take forward all 50 recommendations”. So the Plan’s recommendation on copyright law will be enacted.
Here is that recommendation:
Reform the UK text and data mining regime so that it is at least as competitive as the EU. The current uncertainty around intellectual property (IP) is hindering innovation and undermining our broader ambitions for AI, as well as the growth of our creative industries. This has gone on too long and needs to be urgently resolved. The EU has moved forward with an approach that is designed to support AI innovation while also enabling rights holders to have control over the use of content they produce. The UK is falling behind.
The UK government, then, is committed to reforming copyright law to favour AI companies.
We knew the government had proposed this, but they had up until now framed it as just a proposal, out for consultation. (Unsurprisingly, early feedback has consisted of immediate, strong rejection of the proposal by the creative industries.) We now see openly what many suspected - they have already made up their minds, and committed to change copyright law.
Given that the government has already decided to take this recommendation forward, there is perhaps little point arguing - the time for debate is apparently over. But Matt’s recommendation is based on a number of misunderstandings. I want to outline them here, along with what I think is a better path for the UK.
Misunderstanding 1: “The current uncertainty around intellectual property (IP) is hindering innovation and undermining our broader ambitions for AI”.
There is no uncertainty around current copyright law in the UK. As I’ve written before, there is no copyright exception in the UK for commercial generative AI training. That is, the law is currently clear: you need to license the data you train on. The Copyright, Designs and Patents Act 1988 makes this clear, creating a text and data mining (TDM) exception only for non-commercial research (section 29A).
This is widely accepted in legal circles.
It is hard to find anyone - even AI companies - who argues that commercial generative AI training on copyrighted work without a licence is currently legal in the UK. As Lord Clement-Jones has said, the government’s proposal is “based on the mistaken idea, promoted by tech lobbyists and echoed in the consultation, that there is a lack of clarity in existing copyright law”. There is no uncertainty.
Several members of government have recently cited this apparent uncertainty around copyright law. It is hard to see it as anything other than intentional misdirection, deployed in order to justify changing the law (as it is much more palatable to change the law if existing law is unclear). It is disappointing to see it repeated in the Action Plan.
Misunderstanding 2: “The current uncertainty … is hindering … the growth of our creative industries.”
If you believe that industries themselves are the best judges of what helps or hinders their growth, then this is false.
The creative industries are adamant that a change to copyright law of the kind that the UK government and the Action Plan are suggesting would be disastrous for them. A group representing thousands of creators, the Creative Rights in AI Coalition, are definitive on this:
“Rights holders do not support the new exception to copyright proposed”.
Owen Meredith, CEO of the News Media Association, has called the government’s proposal “unworkable”. Kate Mosse has written:
“stealing intellectual property is an assault on creativity and copyright, and will undermine the UK’s world-leading creative economy.”
Baroness Kidron, widely supported by the creative industries, has said:
“The government is consulting on giving away the creativity and livelihoods of the UK creative sector which is worth £126bn a year”.
And 39,000 people, including many British creators, have signed a statement saying the unlicensed use of copyrighted work for AI training - precisely what the government proposes to allow - “is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted”.
The UK’s creative industries account for 5% of GDP, and are respected the world over. Current copyright law is not hindering their growth. The government should listen to what they’re saying - which is that copyright law must not be changed - rather than thinking it has a better idea of what is best for them.
Misunderstanding 3: “[The EU’s approach enables] rights holders to have control over the use of content they produce”.
The EU has adopted an opt-out regime for generative AI training, meaning that rights holders must actively opt out if they don’t want AI companies training on their works. However, as has been widely discussed, opt-out schemes of this sort do not give rights holders real control. As I’ve written elsewhere:
But opt-out schemes for generative AI training do not let rights holders successfully opt out their works. This is because opt-out schemes only let you opt out the works where you control them — they don’t let you successfully opt out downstream copies of your works. These copies are out in the wild — you have no control over them. A photographer’s image being used in an ad; a journalist’s article being screenshotted and shared online. The creative industries are built on downstream copies.
URL-based opt-out schemes only opt out specific URLs from training, but you can only opt out works at URLs you control. Metadata-based schemes add information to files themselves, but this information is often and easily removed, and some media types (e.g. text) cannot have metadata added. The best hope for a working solution is automatic content recognition (ACR) — some centralised repository of opted-out content that is scanned at point of training — but ACR technology is woefully inadequate for these purposes, and is particularly unhelpful for copyrighted works that are themselves embedded in other works.
Here is one example. I’m a composer, and recordings of my music exist in various places online, out of my control. I have no way of opting these out of training using any existing or hypothetical opt-out scheme. And there are millions of examples like this across the creative industries.
People in government have said the “technology has moved on”, suggesting it will somehow be possible as a rights holder to successfully keep your works from being trained on. I’ve run opt-out schemes at generative AI companies myself, and I am confident they are mistaken. The most widely-used opt-out scheme, robots.txt, is totally unfit for purpose — it gives rights holders no control whatsoever over whether downstream copies of their works are trained on. There are others, but none come close to solving the downstream copies problem. No one has even suggested a hypothetical opt-out scheme that would solve this. At the very least, no change to copyright law should be made that relies on opt-outs until an effective opt-out system is built and rigorously tested.
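To make concrete why robots.txt cannot solve the downstream copies problem, here is a minimal sketch using Python’s standard-library robots.txt parser. (The `GPTBot` user-agent and the site names are illustrative assumptions, not taken from the Action Plan; the point is the mechanism, not any particular crawler.)

```python
from urllib import robotparser

# robots.txt the rights holder serves on their OWN site, opting every
# URL on that site out of crawling by a hypothetical AI training bot.
own_site = robotparser.RobotFileParser()
own_site.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# A crawler consults each site's own robots.txt. The opt-out holds here:
own_site.can_fetch("GPTBot", "https://my-site.example/my-work")  # False

# But a downstream copy of the same work lives on a site whose
# robots.txt the creator does not control. If that site's file has no
# rules, everything is allowed by default — and the copy gets crawled.
other_site = robotparser.RobotFileParser()
other_site.parse([])  # an empty robots.txt: all crawling permitted
other_site.can_fetch("GPTBot", "https://host.example/copy-of-my-work")  # True
```

The opt-out is scoped to the host that serves the file, so it protects exactly the URLs the rights holder already controls, and nothing else.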
The government’s apparent determination to run this consultation, and presumably to change the law, before any such system exists is extremely worrying.
Even if some solution to the downstream copies issue can be found — which is highly unlikely — opt-out schemes are incredibly unfair to creators and rights holders.
There are many reasons for this. There is the fact that all the data suggests fewer than 10% of people eligible to opt out actually do, because many don’t realise they have the chance (consider recent research showing that 60% of artists still don’t know about robots.txt), and those who do face a huge administrative burden. There is the fact that the effect of opting out is nowhere near immediate: opt-out schemes don’t tend to impose deadlines for existing models being retrained and/or retired, meaning models are often live for months or even years after a rights holder has opted out. There is the fact that opt-out schemes are disproportionately unfair to small creators, who are that much less likely to understand their rights or have the bandwidth to go through the opt-out process, despite being precisely the people who need our protection the most. I’ve gone over these and other reasons opt-outs are unfair in this essay.
I suspect no one who has run opt-out schemes of this sort would say they give rights holders control over the use of content they produce. Besides, there is already a legal mechanism for controlling use of works that actually gives rights holders control. It’s called copyright.
To put it simply, the EU should never have adopted opt-out as part of the AI Act. The European creative industries are rightly sounding alarm bells. I suspect it was adopted because legislators were lobbied hard by AI companies, for whom an opt-out scheme is a fantastic outcome, and the unworkability of opt-out schemes was less well understood than it is now. Either way, it will be looked back on as legislation that was rushed through too fast, and that in hindsight was clearly hugely unfair to creators.
On top of all this, many expect there will be legal challenges to the EU AI Act’s opt-out provisions, since they may contravene the Berne Convention, which requires that any copyright exception adopted by signatory countries
does not conflict with a normal exploitation of the work and does not unreasonably prejudice the legitimate interests of the author.
It is hard to see how a copyright exception that lets AI companies use works to compete with the creators of those works, offering an opt-out that will inevitably be missed by most, does not unreasonably prejudice creators.
So the EU’s model does not give rights holders control over the use of their works in AI training, and should not be emulated. We should not simply copy legislation that was rushed through too fast and is already known to be unworkable and unfair. We can, and should, do better.
A better path for the UK
We do not need to change copyright law to be a leader in AI. Much world-changing AI, such as Sir Demis Hassabis’ and DeepMind’s work on AlphaFold, is not built using the world’s creative work. And those AI systems that do require copyrighted work should license that work, as is currently required by law. This is not just desirable; it is already happening. The market for licensing training data is thriving, and a number of AI companies already license all of the work they train on.
The change to copyright law the government has proposed, echoed in this Action Plan, would hand most of the country’s creative work to AI companies, for free, for them to use to outcompete the British creators who made that work. (The opt-out scheme would be little used, and looks awfully like a cover for getting as much work into AI companies’ hands as possible.) This cannot be acceptable to anyone who values the UK’s creators, and who wants our creative industries to continue to prosper.
We can lead in both AI and creativity. The way to do this is to maintain fair copyright laws, and to double down on being the home of responsible AI development, rather than legalising the mass theft of the country’s creative work.
But the debate is apparently over.
I think Matt’s Action Plan contains much to be admired. It makes sense for the British state to support the country adapting to the AI age. But recommendation #24, to reform copyright to favour AI companies at the expense of the creative industries, is a huge own goal for the country. It is a gift for AI company CEOs, who have played fast and loose with copyright law in the hope of getting too big to be regulated. You just have to look at the roster of AI companies lining up to praise the plan - many of which are being sued for copyright infringement on a truly vast scale - to see this.
Unfortunately, the decision seems to have been made. Labour will implement this recommendation, and change copyright law.
I suspect this will be remembered by creators for a very long time.