Just when you thought the AI Act could not get any more bureaucratic, the European Commission has released its latest masterstroke: Guidelines on the Scope of Obligations for General-Purpose AI Models. Think of it as a 30‑plus page manual explaining how to interpret the already complex legal text of the AI Act, which itself is barely a year old. And yes, there is now an official “guideline package” that will sit alongside the mandatory template for training data disclosures and the code of practice for AI providers. We have entered the age of regulation of regulation.
A Rulebook to Tell You If You’re in the Rulebook
The Commission’s starting point: before you can comply, you must first determine whether you are even covered. Sounds simple? Not quite. You are “in scope” if your AI model:
- Consumes more than 10²³ FLOPs in training and
- Can generate language, images, or video … unless it can only perform a narrow task, in which case it is magically “out of scope”.
Unless, of course, the Commission decides otherwise. That means training compute, a metric very few people outside hyperscale AI labs can even estimate accurately, has become the new bureaucratic gatekeeper. If you are over the threshold, welcome to the paperwork.
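For the curious (or the worried), a back-of-the-envelope check is possible using the widely cited 6 × parameters × tokens approximation for dense transformer training. The sketch below is purely illustrative: the heuristic, figures, and function names are mine, not anything the guidelines prescribe.

```python
# Rough training-compute estimate via the common 6 * N * D heuristic for
# dense transformer training. Purely illustrative; not the Commission's method.

GPAI_THRESHOLD_FLOPS = 1e23  # indicative "in scope" threshold from the guidelines


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * parameters * training_tokens


# Hypothetical example: a 7B-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(7e9, 2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")           # ~8.4e+22
print("Over the 10^23 threshold?", flops > GPAI_THRESHOLD_FLOPS)  # False, just under
```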
The “Systemic Risk” Club
Cross the higher 10²⁵ FLOPs barrier and your model is presumed to pose “systemic risk”. This is Brussels‑speak for “your AI is now scary enough to require extra hoops”:
- Continuous risk assessments
- Model evaluations
- Cybersecurity guarantees
- Incident reporting
You must even notify the Commission before your model is finished if you think it will cross the limit. Planning a big training run? Better tell Brussels in advance. Yes, really.
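Put the two thresholds together and the whole classification exercise fits in a few lines. The numbers below mirror those named in the guidelines; the function name and the wording of the labels are my own.

```python
# Toy tiering of a model by estimated training compute, using the two
# thresholds named in the guidelines. Labels and function name are illustrative.

GPAI_THRESHOLD_FLOPS = 1e23           # presumed general-purpose AI model
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumed GPAI with systemic risk


def classify(training_flops: float) -> str:
    if training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "GPAI with systemic risk: extra obligations plus advance notification"
    if training_flops > GPAI_THRESHOLD_FLOPS:
        return "GPAI: baseline documentation and transparency obligations"
    return "Below the indicative threshold: presumed out of scope"


print(classify(8.4e22))  # the hypothetical 7B model from above
print(classify(4e25))    # a frontier-scale training run
```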
Providers, Modifiers, and Everyone in Between
The guidelines spend pages defining who counts as a “provider”:
- Built it yourself? You are a provider.
- Paid someone else to build it for you? You are still a provider.
- Fine‑tuned someone else’s model with enough compute? Congratulations, you too are a provider.
- Integrated a model into your app? Maybe you are a provider. Maybe not. Depends on which clause applies.
There is even an indicative threshold for when fine‑tuning makes you a new provider: more than a third of the original model’s training compute. The EU has officially quantified when a derivative model becomes yours, and your problem.
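If you want to see just how blunt that criterion is, here it is as a one-line check. The one-third figure comes from the guidelines; everything else (names, numbers) is made up for illustration.

```python
# The indicative "one third" rule: modification compute exceeding a third of the
# original model's training compute makes the modifier a new provider.
# Names and figures are illustrative, not taken from the guidelines.

def becomes_new_provider(original_training_flops: float, modification_flops: float) -> bool:
    return modification_flops > original_training_flops / 3


# Hypothetical: fine-tuning with 4e24 FLOPs on top of a model trained with 1e25 FLOPs.
print(becomes_new_provider(original_training_flops=1e25, modification_flops=4e24))  # True
```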
Open Source… But Not Too Open
The AI Act famously promised exemptions for open‑source models. The guidelines clarify:
- Your licence must allow access, use, modification, and redistribution, with no commercial restrictions.
- You cannot “monetise” the model in any way that limits free access, including sneaky ad‑supported hosting.
- You must publish everything: weights, architecture, usage details.
And if your open model has “systemic risk”? No exemptions. You are back in the compliance game.
A Glimpse of the Future
The AI Office, a new EU entity, will enforce all this from 2 August 2025. Compliance will be judged not only on the law, but also on these guidelines. And the guidelines themselves? Non‑binding… except they are the very framework the Commission will use to decide enforcement. This is the European regulatory machine at full throttle:
- Write a sweeping law.
- Create a dense template for disclosures.
- Draft a “code of practice” for good measure.
- Add a “guideline” to interpret the law.
- Enforce the guideline as if it were the law.
Somewhere in this process, innovation is expected to flourish.