GPAI Code of Practice: reactions on the 3rd draft and consumer perspective
The EU’s GPAI Code of Practice aims to guide AI development, but critics argue it goes beyond the AI Act, increasing compliance costs and risking exposure of trade secrets. As the final draft approaches, tensions remain between creators, industry, and regulators.
The debate around the EU’s General-Purpose AI Code of Practice is nowhere near as loud as it should be. The Code outlines very specific obligations for developers of the most advanced general-purpose AI models, may set an important precedent for tech regulation, and essentially replays the core tensions of the AI Act - and of tech regulation more broadly: innovation vs. regulation, copyright vs. development, transparency vs. trade secrets, and all the usual points of contention we face in Europe.
While various industry and authors’ associations continue flooding Henna Virkkunen’s office with letters about the third draft of the GPAI Code of Practice, let’s take a moment to look at the issue from a consumer perspective - and explain it for those who are curious but don’t live and breathe the AI Act or AI regulation.
The genesis: why did we need the Code in the first place?
Short answer: the EU decided to adopt a Code as a stopgap until formal standards are in place. Legislators usually can’t - and politically won’t - cover every implementation detail in a single legislative act, and setting up robust standards takes time (especially for fast-evolving technologies like AI), so the Code is the third way the EU has chosen to take.
This Code would, in theory, serve as a voluntary compliance guideline for providers of GPAI models - models that display ‘significant generality’ and are capable of performing a wide range of distinct tasks. That covers the well-known LLMs, but also smaller providers whose models can be used for a wide range of tasks, and businesses that adapt such models for their own purposes.
The main point of disagreement is whether the Code follows the obligations in the AI Act - or goes beyond it. Many critics in the EU have warned that the Code of Practice goes beyond the obligations set out in the AI Act, introducing stricter rules that increase compliance costs and pose risks to trade secrets.
This matters for future precedent too - if any act or regulation can be ‘hack-claused’ through annexes or Codes, it risks undermining the core principles of democratic decision-making and regulatory predictability.
The AI Office has established four-plus-one committees, focusing on: 1) commitments, 2) transparency, 3) copyright, and 4) safety and security. The final version of the Code of Practice is expected by May 2, 2025, and the Member States, via the AI Board, will then make the final decision. However, given the complexity of the Code and the significant disagreements among stakeholders, this timeline appears ambitious.
Copyright: the scope, the protection of authors and the freedom to innovate
One risk associated with using AI is the potential to infringe copyright. Signatories to the Code of Practice must develop, maintain, and implement an internal copyright policy to ensure compliance with EU copyright law.
In the third draft, Signatories are required to make reasonable efforts to mitigate this risk, and the updated language is more technically neutral and focused on proportionate obligations. The latest draft also removes the obligation to publish the copyright policy, merely encouraging publication instead, which aligns more closely with the AI Act’s original scope.
These changes (along with others) have received opposing reactions from European authors and the European tech industry associations.
Several authors’ organizations have recently announced they will not support the third draft of the GPAI Code of Practice, published on March 11, 2025, arguing that it “misinterprets copyright law” and “sets the bar so low.”
On the other hand, industry representatives - such as the Polish Digital Association, together with startup associations and Central and Eastern European industry associations - have welcomed the positive elements of the third draft. Still, they emphasize that the Code “contains elements not present in the AI Act” and that “issues such as copyright, transparency, risk assessment, and governance <...> need further refinement <...> to ensure clarity and coherence.”
Complaints management
Many founders could share horror stories about complaints under the General Data Protection Regulation - stories of operations being stifled and teams bogged down in endless correspondence with Data Protection Authorities and even courts. In some cases, complaints are filed with malicious intent, and the complainants have no interest in resolving the issues.
Under the third draft, Signatories of the Code are still required to provide a contact point for rightsholders and to establish a mechanism for submitting complaints. However, they are now allowed to reject complaints that are unfounded or excessive - a lesson learned from the GDPR experience.
Consumer perspective: all sides will have to be a little uncomfortable with the outcome
Over the past months, the European Commission has made notable statements about reducing the administrative burden for SMEs, which received a warm welcome from European industry, and we all remember Mario Draghi’s remarks on digital overregulation harming European innovators. Last but not least, the EU has set big goals for home-grown AI development and application via its AI Continent Action Plan.
While the GPAI Code of Practice may seem to concern only “the big players” today, the fast-changing landscape and the EU’s ambitions for AI development and adoption at home will inevitably expand its relevance across the broader European industry. It’s not just about the large LLMs; it also affects smaller providers whose models can be used for a wide range of tasks, as well as businesses that adapt these models for their own purposes.
The third draft of the Code of Practice is a step in the right direction - we have to credit the AI Office and the Committees for trying to balance regulatory clarity with flexibility for AI developers.
But it’s crucial not to lose momentum and to fix the issues that may still create excessive requirements, drive up compliance costs (which will inevitably be passed on to consumers), and pose risks to trade secrets.
Most importantly, the Code of Practice is a voluntary agreement - the EU still needs the parties to sign it. Otherwise, we’ll be stuck in yet another round of alignment and adoption.
The Code sets an important precedent for the future of tech regulation, and finding a truly fair compromise will, inevitably, leave all sides at least a little uncomfortable.