Summary: The EU AI Act is a comprehensive piece of AI regulation, but it’s hundreds of pages long. Here’s the short version: why it matters to you, your organization, and countries both inside and outside the European Union.
↓ Go deeper (8 min)
In April 2021, the European Commission proposed the first EU law on artificial intelligence. It is a risk-based classification framework with accompanying obligations; it entered into force on 1 August 2024 and is set to become fully applicable by August 2026.
The goal is to protect consumers and citizens from potential harms caused by AI, making sure that the systems of today and tomorrow are “safe, transparent, traceable, non-discriminatory and environmentally friendly”. These are ambitious goals and the Act represents a major legislative achievement, but I bet many of you don’t know what’s actually in it; and I don’t blame you. I didn’t know, either.
The complete AI Act is literally hundreds of pages long, so I decided to write something more… digestible, in an attempt to better understand it myself and, hopefully, help you understand it too.
What kind of AI systems are banned in the EU?
As I mentioned, the new rules form a tiered framework based on the level of risk a system poses: unacceptable risk, high risk, limited risk, and low or no risk.
The Act states that some AI systems pose an unacceptable risk to society. Banned applications under the AI Act include:
Cognitive behavioral manipulation of people or specific vulnerable groups, for example children (Article 5.1a, 5.1b).
Classifying people based on behavior, socio-economic status or personal characteristics (i.e. social scoring), including predicting the risk of a natural person committing a criminal offence (Article 5.1c, 5.1d).
Using AI for biometric identification and categorization of people, including real-time usage such as facial recognition in public spaces (Article 5.1e-5.1h).
One could argue that AI companions and their associated risks fall under Articles 5.1a and 5.1b, which prohibit:
AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques (…) that exploit any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behavior of that person or a person belonging to that group.
You can’t tell me that isn’t exactly what AI companions are capable of. And what about the YouTube recommendation algorithm? Are rabbit holes of ideologically congenial, extreme, and problematic recommendations considered unacceptable under the EU AI Act? Maybe, possibly; we don’t know.
As for the various ways in which AI can be used to identify and classify citizens in offline and online spaces (i.e. social scoring and state surveillance), the Act includes provisions that allow for exceptions for law enforcement purposes. “Real-time remote” biometric identification systems will be allowed in a limited number of serious cases, while “post remote” biometric identification systems, where identification occurs after a significant delay, will only be allowed for the prosecution of serious crimes and only after court approval.
What kind of systems are considered high-risk?
Articles 6 and 7 describe what the EU classifies as high-risk AI systems, which include AI used as a safety component in products covered by EU product safety legislation, such as toys, aviation, cars, medical devices, and lifts.
Also subject to the high-risk category are AI systems that fall into the following areas:
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control management
Assistance in legal interpretation and application of the law
High-risk AI systems will have to be assessed before being deployed and registered in an EU database, and citizens will have the right to file complaints with the designated national authorities. It’s unclear how prepared those national authorities actually are.
Also worth mentioning is that the penalties for non-compliance (for both the banned and high-risk categories) are severe. Companies found to be deploying prohibited AI systems face fines of up to €35 million or 7% of their total worldwide annual turnover for the previous year, whichever is higher.
There are currently few legal requirements for applications that fall into the low- and no-risk categories, other than maintaining technical documentation and regular risk assessments to prove, when asked, that the systems indeed qualify as low-risk.
General-purpose AI models (i.e. generative AI)
General-purpose AI models, like the ones powering ChatGPT, Gemini, and Claude, are seen and treated as a category of their own. Most importantly, foundation model providers will have to comply with transparency requirements and EU copyright law (a requirement that is being heavily lobbied against).
Rules include:
Disclosing that content was generated by AI (i.e. images, audio, or video will need to be clearly labelled as AI-generated)
Designing the model to prevent it from generating illegal content
Publishing summaries of copyrighted data used for training
More advanced AI models, like GPT-4 and beyond, will have to undergo thorough safety evaluations, and any serious incidents will have to be reported to the European Commission.
It’s worth noting that the EU has missed a key milestone in its effort to rein in the riskiest artificial intelligence models amid heavy lobbying from the U.S. government, as per reporting by Politico:
In a letter to the Commission in late April, obtained by POLITICO, the U.S. government said the draft rules had “flaws” and echoed many concerns aired in recent months by U.S. tech companies and lobbyists.
Suffice it to say, the jury is still out on that one.
AI literacy and the dangers of ‘math washing’
Last but not least, the EU AI Act makes AI literacy a legal obligation. This basically means every single organization, big and small, must actively foster AI literacy among its staff and ensure they have sufficient knowledge and understanding to make informed decisions about the use of AI and the development of any new AI-powered applications.
This is of critical importance, because irresponsible deployment can cause real-world harm; the most common being systems riddled with bias.
Sadly, algorithmic bias often hides behind a facade of neutrality (AI in its deepest form is ‘just’ math and numbers are neutral, right?), which is sometimes referred to as mathwashing. Implemented poorly, AI can lead to discriminatory decisions influenced by not-so-neutral datasets. And if anyone needs a reminder of just how wrong things can go, may I remind you of the Dutch ‘toeslagenaffaire’ (the childcare benefits scandal), in which tens of thousands of parents and caregivers from mostly low-income families were falsely accused of fraud by the Dutch tax authorities.
If there’s one thing the AI Act suffers from, it’s a lack of clarity. Companies using AI must ensure that the relevant employees possess “an adequate level of AI competence”, but when this obligation is considered fulfilled remains unclear, as does exactly which applications will fall under which legal prohibitions.
Nonetheless, the AI Act has successfully codified considerable consumer and citizen protections into law, and it will force accountability on those who want to develop and deploy AI systems.
To this day, the U.S. has not passed a single, comprehensive federal AI law. Nor has Canada or the UK.
Speak soon,
— Jurgen
About the author
Jurgen Gravestein is a product design lead and conversation designer at Conversation Design Institute. Together with his colleagues, he has trained more than 100 conversational AI teams globally. He’s been teaching computers how to talk since 2018.
Follow for more on LinkedIn.
Thanks for the clarity! It sounds quite well-considered and I especially welcome the "AI literacy" clause. What's your take?
The AI Act is a crucial step in setting guardrails for responsible AI, especially around ‘high-risk’ systems, but its rigid classification system may stifle smaller innovators who cannot afford the compliance burden (and there are surprisingly many of them). While it rightly targets transparency and accountability, its one-size-fits-all approach could slow down open-source AI progress, ironically hurting the very values of openness and fairness it seeks to protect.