Eight things the EU AI Act won't let you build, and why the rules are harder than they look
What the EU AI Act's absolute bans actually mean if you're building AI for finance, HR or health, and why Article 5 deserves a careful read
Article 5 of the EU AI Act contains the outright bans. Not high-risk systems with extra safeguards, not transparency obligations. The uses the Act prohibits entirely. The list covers manipulation, exploitation of vulnerabilities, social scoring, individual criminal-risk prediction, untargeted facial-image scraping, emotion recognition in workplaces and schools, biometric categorisation by sensitive traits, and real-time remote biometric identification in public spaces for law enforcement.
A year after these prohibitions became applicable, enforcement is still early, and public precedent is thin. That might feel reassuring. I’d read it differently: the enforcement machinery is still being assembled, and the first cases will set precedents that are genuinely hard to predict right now.
The eight things you can’t do
If you’re in finance, HR or health, at least two or three of these touch what you’re probably building.
An HR tool that scores candidates using inferred emotional states. A lending product that assesses individual risk using behavioural proxies. A health platform that categorises users by inferred characteristics. These aren’t hypothetical edge cases. They describe products that exist and are being sold right now.
The eight prohibited practices, stated plainly:
AI that manipulates people through techniques they’re unaware of.
Exploiting specific vulnerabilities like age, disability or economic situation.
Social scoring that leads to unjustified or detrimental treatment, whether by public authorities or private companies.
Predicting an individual's risk of committing a crime based solely on profiling or personality traits.
Scraping faces at scale to build recognition databases.
Emotion recognition in workplaces or schools.
Biometric categorisation based on sensitive characteristics.
Real-time remote biometric identification by law enforcement in publicly accessible spaces.
Worth internalising early: the AI Act doesn’t ban technologies. It bans practices, meaning specific uses of AI systems that create unacceptable risk. The same underlying model can be fine in one context and prohibited in another. That sounds like good news until you realise the compliance question is never “is this technology allowed” but always “is this specific use, in this specific context, for this specific purpose, prohibited.” That question has to be answered repeatedly as your product evolves, not once at launch.
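If you keep a use-case register, that distinction is straightforward to encode. Below is a minimal Python sketch, assuming a hypothetical internal register; none of these names come from any real compliance library. The unit of analysis is the (system, context, purpose) triple, never the model itself.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ProhibitedPractice(Enum):
    """The eight Article 5 categories, paraphrased. Not legal definitions."""
    MANIPULATION = auto()
    EXPLOITING_VULNERABILITIES = auto()
    SOCIAL_SCORING = auto()
    CRIMINAL_RISK_PREDICTION = auto()
    UNTARGETED_FACE_SCRAPING = auto()
    EMOTION_RECOGNITION_WORK_EDU = auto()
    BIOMETRIC_CATEGORISATION = auto()
    REALTIME_REMOTE_BIOMETRIC_ID = auto()

@dataclass(frozen=True)
class UseCase:
    """One assessed use. The same model appears in many of these."""
    system: str   # the underlying model or product
    context: str  # e.g. "workplace", "consumer app", "public space"
    purpose: str  # e.g. "candidate screening", "support triage"

# The register maps each use to whatever a reviewer flagged. The same
# system can be clean in one row and flagged in another, because the
# question is always about the use, never the technology.
register: dict[UseCase, set[ProhibitedPractice]] = {
    UseCase("sentiment-model-v2", "consumer app", "support triage"): set(),
    UseCase("sentiment-model-v2", "workplace", "employee monitoring"):
        {ProhibitedPractice.EMOTION_RECOGNITION_WORK_EDU},
}

def needs_review(use: UseCase) -> bool:
    # Unassessed uses escalate by default: the question has to be
    # re-answered whenever the context or purpose changes.
    return use not in register or bool(register[use])
```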
The GDPR problem that nobody has resolved
The AI Act doesn’t replace GDPR, and compliance with one doesn’t satisfy the other. This is the part that’s genuinely hard to plan around.
Some practices Article 5 now prohibits have previously been found lawful under GDPR in specific conditions. Denmark’s Datatilsynet approved F.C. Copenhagen’s use of facial recognition for stadium access control under a legal basis of substantial public interest. That’s a private-sector sports venue, not law enforcement, but the tension it illustrates is real: GDPR enforcement has historically asked “is this lawful under these conditions,” while the AI Act asks “is this practice prohibited regardless of conditions.” For some practices, those questions produce different answers, and there’s no binding guidance yet on what happens when they conflict.
The practical upshot: treat them as separate analyses with different thresholds and different exceptions. A lawful basis under Article 6 GDPR doesn’t make a prohibited AI practice lawful. An AI Act exemption doesn’t satisfy your data protection obligations. You need both, worked through separately.
If your product touches automated decision-making that significantly affects individuals, Article 22 GDPR belongs in that analysis too. The individual risk prediction and social scoring prohibitions in the AI Act cover overlapping territory, but the boundaries don’t line up exactly. Getting through one doesn’t get you through the other.
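A sketch of that two-track structure, with purely illustrative field names. The only point the code makes is that the gates are independent: neither analysis can be collapsed into the other.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Outcome of one regime's analysis. Illustrative fields only."""
    regime: str
    cleared: bool
    notes: str

def overall_clearance(gdpr: Assessment, ai_act: Assessment) -> bool:
    # Independent gates: a lawful basis under Article 6 GDPR does not
    # unban an Article 5 practice, and an AI Act exemption does not
    # supply a GDPR lawful basis. Both must clear, worked separately.
    return gdpr.cleared and ai_act.cleared

gdpr = Assessment("GDPR", cleared=True,
                  notes="lawful basis identified, Article 22 reviewed")
ai_act = Assessment("AI Act", cleared=False,
                    notes="use flagged in the Article 5 mapping")
assert overall_clearance(gdpr, ai_act) is False
```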
The in-house development problem
The AI Act's text defines "putting into service" as supplying an AI system for first use, either to a deployer or for own use. The Commission's guidelines read "own use" broadly enough to pull in purely internal tools: AI you build and deploy yourself without supplying it to anyone else.
At the same time, Article 2(8) excludes research, testing and development activities from the regulation’s scope until a system is placed on the market or put into service. If in-house development counts as “putting into service,” when does the R&D exemption end? The Commission acknowledged the tension and hasn’t resolved it.
If you’re building internal AI tools for HR or compliance functions, the safest current reading is to treat any system that processes personal data and informs decisions about individuals as potentially within scope. That may be overcautious. Getting it wrong in the other direction is worse.
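That cautious default is easy to encode in an internal-tool inventory. A sketch under the assumptions above, with hypothetical field names; the legal test itself is unsettled, so this captures only the conservative reading, not settled law.

```python
from dataclasses import dataclass

@dataclass
class InternalTool:
    name: str
    processes_personal_data: bool
    informs_decisions_about_individuals: bool
    in_controlled_testing: bool  # the unresolved Article 2(8) question

def potentially_in_scope(tool: InternalTool) -> bool:
    """Conservative reading: personal data plus decision influence is
    treated as potentially 'put into service', even for purely internal
    use. The R&D carve-out is trusted only while the tool genuinely
    stays in testing, because the Commission has not said where it ends."""
    if tool.in_controlled_testing:
        return False  # document the reasoning and revisit at rollout
    return (tool.processes_personal_data
            and tool.informs_decisions_about_individuals)
```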
Who actually enforces this
Each EU member state designates its own market surveillance authorities; the deadline for doing so was 2 August 2025. What the enforcement landscape looks like varies by country, and the variation matters in practice.
Ireland’s February 2026 draft legislation sets up a central AI Office alongside multiple sectoral regulators: the Central Bank, Coimisiún na Meán, the Workplace Relations Commission, and the Data Protection Commission, each responsible for different parts of the framework. France gives the CNIL a role in biometric data enforcement, but AI Act enforcement there isn’t neatly divided either.
A lending product operating in Ireland and France could face scrutiny from different authorities in each country, applying the same rules with potentially different interpretations. That’s the architecture the regulation was built on, and it’s going to produce inconsistent outcomes for a while.
The penalty for a prohibited practices violation is up to EUR 35 million or 7% of global annual turnover, whichever is higher. That’s the top tier in the regulation, reserved for Article 5.
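The "whichever is higher" structure means the percentage prong takes over once global turnover passes EUR 500 million, since 7% of 500 million is exactly 35 million. A quick worked example:

```python
def max_article_5_fine(global_annual_turnover_eur: int) -> int:
    # Article 5 tier: the greater of EUR 35m or 7% of global turnover.
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)

assert max_article_5_fine(100_000_000) == 35_000_000  # floor applies
assert max_article_5_fine(600_000_000) == 42_000_000  # 7% prong dominates
```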
What to do with this now
Map your product against the eight prohibitions separately from your GDPR analysis. If you’re using behavioural inference, emotional analysis, or individual risk scoring in any form, that mapping needs someone who knows both regimes. The questions they ask are different enough that running one analysis doesn’t surface the gaps in the other.
For HR products specifically, the emotion recognition and social scoring prohibitions are the most likely to bite. Both are easier to stumble into than a plain reading of the law suggests.
Before launching in a new EU market, find out which authority supervises your product there. In some jurisdictions, it’s the DPA. In others, it’s a sector regulator. In some, it’s both, with responsibilities split depending on which Article 5 prohibition is at issue.
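A per-market lookup, however crude, forces that question to be answered before launch. The entries below are placeholders to verify against each member state's implementing law, not statements of who actually holds the file:

```python
# Placeholder data: confirm against each member state's implementing
# law before relying on it. Several markets split supervision across
# more than one authority depending on the prohibition at issue.
SUPERVISORS: dict[str, list[str]] = {
    "IE": ["Central Bank of Ireland", "Data Protection Commission"],
    "FR": ["CNIL (biometric aspects)", "sector regulator (unconfirmed)"],
}

def supervisors_for(market: str) -> list[str]:
    if market not in SUPERVISORS:
        raise LookupError(
            f"no supervisor mapped for {market!r}; resolve before launch")
    return SUPERVISORS[market]
```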
Internal AI tools don’t have a clearly defined safe harbour. If you’re processing personal data about employees or customers through internal systems that inform real decisions, document your analysis of whether those systems fall within scope. The R&D exemption has an endpoint that the Commission hasn’t precisely defined yet.
The Digital Omnibus initiative moving through Brussels is looking at amending both the AI Act and the GDPR to address some of these overlaps. The current proposal leaves Article 5 alone, though several member states are pushing to add sexual deepfakes and AI-generated CSAM to the prohibited practices list. If that passes, any product with generative capabilities needs its compliance analysis revisited.
One last practical note: the template for the fundamental rights impact assessment (FRIA) that certain Article 5 exception deployments require still hasn't been published. It's expected this year. Until it appears, parts of the compliance path are incomplete, which matters if you're building a timeline around EU market readiness.
