Five AI compliance conflicts for 2026-2027
I mapped the five biggest AI compliance conflicts for US-EU operators. The results were worse than expected.
If you’re building or deploying AI tools across both the US and EU right now, you’re not dealing with two regulatory regimes that happen to be different. You’re dealing with two regimes that are actively moving in opposite directions, on different timelines, with different definitions of the same words. That’s before you factor in the 44 US states that have each decided to write their own rules.
I’m not going to tell you it’s manageable if you just follow the right framework. Some of it is genuinely a mess. What I can do is tell you where the sharpest conflicts are, and what the practical exposure actually looks like.
1. “High-risk” doesn’t mean the same thing on both sides of the Atlantic
The EU AI Act classifies risk by sector. If your system touches critical infrastructure, employment, education, credit, law enforcement or a handful of other defined categories listed in Annex III, it’s high-risk and subject to conformity assessments, technical documentation and EU database registration. That’s the test, and it’s binary.
US state laws, particularly Colorado SB 205, classify risk by the decision being made. Colorado focuses on “consequential decisions” affecting housing, insurance, employment and legal services, but the test is whether the decision materially affects a consumer, not which industry you’re in. California is building something similar, though the exact shape keeps shifting.
The practical problem: a system that sits below the EU threshold can still trigger Colorado’s requirements, and vice versa. You can’t map one risk classification onto the other. If you’re trying to maintain a single risk register that covers both, you’ll either over-classify everything (expensive, slow) or under-classify something and get caught. Most teams I see are just defaulting to the EU standard and hoping it’s conservative enough to cover US obligations too. Sometimes it is. Sometimes it isn’t.
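If you do try to hold both classifications in one register, the workable version is to track the two tests as separate fields rather than forcing one onto the other. Here's a rough sketch of what I mean; the category lists and field names are my own paraphrase for illustration, not the statutory text of either regime.

```python
from dataclasses import dataclass

# Illustrative only: these sets paraphrase the two regimes' categories,
# they are not the legal definitions.
EU_ANNEX_III_AREAS = {"employment", "education", "credit",
                      "critical_infrastructure", "law_enforcement"}
CO_CONSEQUENTIAL_DOMAINS = {"employment", "housing", "insurance", "legal_services"}

@dataclass
class AISystemEntry:
    name: str
    eu_use_area: str                 # which Annex III-style area the system touches, if any
    us_decision_domain: str          # what kind of consumer decision it feeds into, if any
    materially_affects_consumer: bool

    def eu_high_risk(self) -> bool:
        # EU-style test: membership in a defined use-case list
        return self.eu_use_area in EU_ANNEX_III_AREAS

    def co_high_risk(self) -> bool:
        # Colorado-style test: does it materially affect a consequential decision?
        return (self.us_decision_domain in CO_CONSEQUENTIAL_DOMAINS
                and self.materially_affects_consumer)

# The point: the two flags are independent, so the register has to carry both.
tool = AISystemEntry("resume_screener", eu_use_area="employment",
                     us_decision_domain="employment",
                     materially_affects_consumer=True)
print(tool.eu_high_risk(), tool.co_high_risk())
```

The moment you collapse those two flags into one "high-risk: yes/no" column, you've baked in whichever regime you happened to copy first.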
2. Who’s liable depends on which side of the ocean you’re on
Both the EU and US state laws draw a line between providers (the people who build AI systems) and deployers (the people who use them in products or services). The trouble is, they put the liability weight in different places.
The EU AI Act puts the heaviest obligations on providers. Conformity assessments, data governance requirements and the EU representative obligation all fall primarily on whoever built the system. Deployers have duties, but they’re lighter.
Colorado flips this in employment contexts. Under SB 205, deployers bear the main responsibility for conducting impact assessments and ensuring the system doesn’t discriminate, even if they bought the system off the shelf from a third party. The law expects deployers to audit tools they didn’t build and may have limited visibility into.
So if you’re a US company using a European AI vendor’s hiring tool, with EU employees or candidates in scope, you may find yourself legally responsible under Colorado law for auditing a system whose internal workings the vendor isn’t required to hand over under EU law. That’s a real contracting problem, and most vendor agreements don’t yet address it cleanly.
3. Transparency obligations will collide with your IP protections
The EU AI Act requires deployers of high-risk systems to give affected individuals meaningful information about how automated decisions affecting them were made. For GPAI model providers, technical documentation and training data summary requirements are already in effect (since August 2025). Regulators can, in enforcement investigations, request access to considerably more.
The Trump administration’s December 2025 executive order on AI explicitly prioritises protecting trade secrets and maintaining US competitive advantage. The direction of travel is toward treating detailed model documentation as commercially sensitive, not publicly disclosable.
This creates a situation where complying fully with an EU regulator’s information request may put you at odds with the commercial and legal posture your US counsel recommends. I don’t think this is a theoretical risk. It’s already producing real tension in how US-based AI companies are thinking about their EU market documentation strategy.
4. Bias audits: the methodology problem
New York City’s Local Law 144 requires annual bias audits for automated employment decision tools, with specific impact ratio calculations published publicly. Colorado has its own statistical requirements. The EU AI Act takes a different approach entirely: it focuses on data governance at the training stage, requiring that datasets be representative and checked for bias before deployment, with human oversight built into the process.
These are not compatible methodologies. Passing an EU conformity assessment on data quality doesn’t mean your system will pass a New York City bias audit. The statistical tests are different, the thresholds are different, and the timing is different (EU is pre-deployment, NYC is annual post-deployment).
If you’re running AI-assisted hiring at any scale, you probably need both a data governance process that satisfies EU requirements and a separate annual audit process that satisfies NYC and Colorado. Two processes, two completely different definitions of what “fair” means statistically.
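To make the methodological gap concrete, here's roughly the shape of the NYC-style calculation, as I read Local Law 144's selection-rate approach. The counts are made up, and the 0.8 line is the familiar "four-fifths" rule of thumb from US employment practice rather than a threshold the law itself sets; an EU data governance review wouldn't produce this number at all.

```python
# Sketch of an NYC Local Law 144-style impact ratio calculation (selection rates).
# Counts are invented; 0.8 is the four-fifths rule of thumb, not an LL144 threshold.

selected = {"group_a": 120, "group_b": 45, "group_c": 30}
applied  = {"group_a": 400, "group_b": 200, "group_c": 150}

# Selection rate per group, then each group's rate relative to the best-performing group
selection_rates = {g: selected[g] / applied[g] for g in selected}
best_rate = max(selection_rates.values())
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection_rate={selection_rates[group]:.2%}, "
          f"impact_ratio={ratio:.2f} ({flag})")
```

Nothing in an EU conformity assessment asks for that number. The EU check happens upstream, on the training data, before anyone has been selected or rejected at all.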
5. The US federal preemption gamble
This one is uncomfortable to plan around because the outcome is genuinely uncertain.
The Trump administration is actively trying to preempt state AI laws it considers obstacles to US competitiveness. The December 2025 executive order directed the attorney general to challenge state laws that conflict with federal policy. An AI Litigation Task Force is in place. Some state laws may be stayed or struck down.
At the same time, 44 states have active AI legislation, most of which is already in effect. Colorado’s high-risk AI law is scheduled to take effect on June 30, 2026 (it’s been revised twice already and may change again). California’s disclosure and safety laws are live now.
The risk of pausing your US compliance work while you wait to see how the federal preemption plays out is that you’re left exposed under laws that survive the challenge, with no documented compliance history. The risk of building full compliance programmes for every state law is that some of those laws may be struck down before you’ve finished building, and you’ve spent money on nothing.
My honest read: build against the laws that are currently enforceable, keep your documentation light enough that it doesn’t become a sunk cost if the law changes, and don’t make major product architecture decisions based on state laws that are still being revised. Colorado’s law in particular has been rewritten enough times in the past 18 months that treating its current text as settled would be a mistake.
What this actually means for your planning
The EU side of this is at least coherent, despite the deadline uncertainty around high-risk systems (the European Parliament voted in April 2026 to push those deadlines to late 2027 or 2028, pending Council agreement). One regime, one rulebook, even if enforcement runs through national authorities. You can build a compliance programme against it.
The US side is a different problem. It’s chaotic because the underlying political question of who regulates AI federally is unsettled, and state legislatures are filling the gap with whatever their lawyers can draft in time. More effort doesn’t fix that.
The most defensible position right now is a compliance architecture built around the EU’s risk categories, supplemented by jurisdiction-specific auditing processes for the US states where you actually have meaningful operations.
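If it helps to picture that, the shape I have in mind is something like the sketch below: the EU classification as the base layer, with state-specific obligations attached only where you actually operate. The system names, jurisdictions and obligation labels are placeholders, not a checklist.

```python
# Illustrative shape of a jurisdiction-layered compliance map.
# EU risk classification is the base layer; US state obligations attach only
# where the business has real operations. All names here are placeholders.

compliance_map = {
    "resume_screener": {
        "eu": {
            "classification": "high_risk",
            "obligations": ["conformity_assessment",
                            "technical_documentation",
                            "eu_database_registration"],
        },
        "us": {
            "NYC":      ["annual_bias_audit", "public_impact_ratios"],
            "Colorado": ["deployer_impact_assessment", "consumer_notice"],
            # states without meaningful operations: monitor, don't build
        },
    },
}
```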
Don’t try to cover everywhere. Cover where you have real exposure, document what you’ve done and why, and treat the rest as a monitoring problem rather than a compliance problem.
