What's Building | Thursday 26 March 2026
For founders, operators, and creators who are using AI | USA · UK · EU
On the watchlist
Things moving through regulatory pipelines that will matter in the next 3 to 6 months
🇺🇸 USA | Federal preemption and the state law showdown
Status: 🔴 Watch - The White House just picked a fight it may lose, and the state laws you comply with are now in legal limbo
What is happening
On 20 March 2026, the White House released a National Policy Framework for Artificial Intelligence proposing broad federal preemption of state AI laws. The framework directs the Attorney General to establish an AI Litigation Task Force (already active since January) to challenge state laws in federal court, particularly those requiring “alterations to truthful outputs” of AI models or imposing liability on developers for third-party misuse.
The Commerce Department published a report by the 11 March deadline identifying state laws it considers overly burdensome, with California, Colorado, and Texas specifically flagged. This isn’t legislation yet. Congress rejected comparable preemption proposals in the One Big Beautiful Bill Act and the National Defense Authorization Act. The legal strategy relies on a Dormant Commerce Clause theory: state AI laws, the argument goes, place undue burdens on interstate commerce.
Why founders should care
This is the biggest structural question facing AI founders right now. If the Administration wins these cases, your patchwork compliance burden collapses into a single federal floor. If it loses, the current state-by-state regime stays, and you’re managing 50 different sets of rules. The litigation will take time (years, likely), but it affects your roadmap decisions now. If you’re operating across state lines, you’ve already been dealing with Colorado’s impact assessment requirements (now delayed to June 2026), New York City’s bias audit mandate for hiring tools, and California’s automated decision system rules.
These lawsuits won’t immediately kill state laws, but they create uncertainty that may discourage future state action. For now, assume state laws are binding and build your compliance infrastructure accordingly. The federal government’s push for preemption is a real attempt to reduce your burden, but it’s not a done deal.
So what?
If you’re building AI products: If you’re selling hiring, lending, or decision-making tools across state lines, your compliance roadmap depends on which way these lawsuits go. For now, build for the hardest standard (Colorado, California, NYC) and treat federal-only compliance as a bonus if preemption wins. Mark July 2026 as a checkpoint: reassess then whether the litigation is trending toward preemption or toward the status quo, and adjust your roadmap accordingly.
If you’re using AI in your business: If you use AI for hiring, lending, or employment decisions, the state where your employees and applicants live matters more than federal law right now. NYC requires bias audits and advance notice for hiring tools. California and Colorado require impact assessments for high-risk systems. Colorado’s deadline moved to June 2026. Audit your tools by May 2026 to know whether you’re compliant.
If you’re advising AI companies: Tell clients that state law isn’t optional and federal preemption isn’t certain. Help them map which states they operate in and what that means for the timeline and cost. If a client is in Colorado, NYC, or California and hasn’t done impact assessments or bias audits, they’re now late.
Who feels this most
HR / Hiring: New York City’s bias audit requirement is live now for automated employment decision tools. You must publish audit summaries online, notify candidates 10 business days before use, and offer an alternative process if they request one. Illinois now allows candidates to sue directly if they believe they were discriminated against via AI hiring tools, without filing a complaint first. Colorado’s impact assessment deadline is June 2026 (moved from January). If you sell hiring tools, assume all three states are your compliance floor.
Financial Services: Credit scoring, loan approval, and fraud detection are all high-risk under state laws and will be under federal law when the preemption question settles. Multi-state banks should assume state AI laws remain binding. The Federal Reserve has updated model risk management guidance for AI systems. If you approve loans, assume you need impact assessments in Colorado, California, and any state where you originate credit.
Retail & AdTech: Algorithmic decision-making in advertising and pricing is not yet explicitly regulated at the state level, but the FTC’s Section 5 policy statement (see below) makes deceptive AI-generated content and algorithmic discrimination enforceable concerns. If you use AI to target ads or set prices, the FTC is watching.
🇺🇸 USA | FTC’s Section 5 enforcement playbook is live
Status: 🟡 Watch - The FTC just gave itself a blank check to police AI practices you may not know are illegal
What is happening
The Federal Trade Commission published its Policy Statement on AI and Section 5 of the FTC Act on 11 March 2026. The statement doesn’t introduce new rules; it interprets the century-old Section 5 prohibition on unfair and deceptive practices as applying directly to AI systems across their entire lifecycle. The FTC signals five enforcement priorities: algorithmic discrimination, deceptive AI-generated content, privacy violations tied to AI data collection, non-transparent automated decision-making, and false or misleading claims about AI safety.
The statement also targets state laws that require AI models to alter “truthful outputs,” arguing they may themselves be deceptive under federal law. This creates a novel preemption argument: states can’t force you to censor truthful AI outputs because doing so would be deceptive under Section 5.
Why founders should care
This is a big shift in enforcement tone. The FTC isn’t waiting for Congress or new legislation. It’s repurposing existing authority to police AI practices, and it’s being explicit about what it’s watching: generated content (deepfakes, manipulated images), discrimination, transparency, and safety claims.
If you generate content, use AI to make decisions about people, train models on data you collected without clear consent, or claim your AI is safe or unbiased, you’re in the FTC’s enforcement perimeter. The Section 5 standard is broad, which gives the FTC room to act but also means the boundaries aren’t crystal clear. Expect enforcement action in the next 12 months, particularly against companies making safety or fairness claims they can’t substantiate.
So what?
If you’re building AI products: Audit your content generation, disclosure practices, and data sourcing. The FTC is watching for generated content without labels, claims about AI safety or fairness you can’t substantiate, and data collection that lacked clear user consent. If you generate images, text, or audio, label it (a minimal labeling sketch follows this list). If you make safety claims, document them. Document data sourcing and user consent for training.
If you’re using AI in your business: If you use AI to screen candidates, approve credit, detect fraud, or make decisions affecting people, document that you’ve tested the system for discrimination. The FTC is enforcing anti-discrimination law through the AI lens. If you don’t have evidence that your system doesn’t discriminate, fix it before the FTC comes calling.
If you’re advising AI companies: Tell clients to assume the FTC will challenge any claim about AI that they can’t prove, and to treat their AI systems the way they would treat a consumer product: disclosed features, tested safety claims, and transparent decision-making. Section 5 is now an active enforcement territory.
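To make the labeling point concrete, here is a minimal sketch of one way to attach a machine-readable disclosure to generated output before it ships. The wrapper and field names are hypothetical, not a standard; production systems should look at provenance standards such as C2PA rather than invent a schema.

    # Hypothetical sketch: wrap generated output with an AI-generation
    # disclosure before it leaves your system. Field names are illustrative.
    import json
    from datetime import datetime, timezone

    def label_generated_content(content: str, model: str) -> dict:
        """Return the output plus a machine-readable disclosure."""
        return {
            "content": content,
            "disclosure": "This content was generated by an AI system.",
            "model": model,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }

    labeled = label_generated_content("Draft ad copy ...", model="example-model-v1")
    print(json.dumps(labeled, indent=2))

The point is less the schema than the habit: the label travels with the content, so every downstream surface (ads, posts, emails) can disclose it.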
Who feels this most
Media & Publishing: If you generate images, deepfakes, or synthetic media, the FTC is now explicitly watching for deceptive labeling and unlabeled AI content. The policy statement flags AI-generated advertising as a key concern. Label all generated content clearly. Don’t make unlabeled deepfake ads.
Financial Services: The FTC and Consumer Financial Protection Bureau are coordinating on algorithmic discrimination in lending. The FTC’s Section 5 statement makes clear it will enforce anti-discrimination law through the AI lens. If you use AI to score credit, price loans, or detect fraud, document non-discrimination testing.
HR / Hiring: Same enforcement story as financial services. If you use AI to screen resumes, conduct interviews, or rank candidates, assume the FTC is watching for discrimination. Documentation of fairness testing is now your first line of defense.
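If “fairness testing” sounds abstract, the core calculation behind NYC-style bias audits is simple: each group’s selection rate divided by the selection rate of the most-selected group, with ratios below 0.8 (the EEOC’s four-fifths rule of thumb) flagged for review. A minimal sketch, assuming you have selection counts per group; the group labels and numbers below are invented, and a real NYC Local Law 144 audit must be performed by an independent auditor.

    # Minimal sketch of an impact-ratio (four-fifths rule) check.
    # Groups and counts are illustrative, not real data.

    def selection_rate(selected: int, total: int) -> float:
        """Share of applicants in a group the tool selected."""
        return selected / total if total else 0.0

    def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Each group's selection rate relative to the most-selected group."""
        rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # group -> (selected, total applicants)
    outcomes = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (44, 100)}

    for group, ratio in impact_ratios(outcomes).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")

Running a check like this regularly and keeping the outputs is the kind of documentation regulators are asking for.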
🇪🇺 EU | Digital Omnibus trilogue just started, and the deadlines are now firm
Status: 🟡 Watch - EU AI Act timelines are now locked in across the institutions, with compliance deadlines fixed through 2028
What is happening
On 13 March 2026, the EU Council agreed its negotiating position on the Digital Omnibus (the package of AI Act amendments). On 26 March, the European Parliament confirmed its position. Trilogue negotiations (the three-way deal-making between Council, Parliament, and Commission) have now begun, with the next negotiation session scheduled for 28 April 2026.
The key outcome: fixed deadlines for high-risk AI systems are confirmed. Annex III systems (like biometrics and hiring tools) must comply by 2 December 2027. Annex I systems (foundational models and general-purpose AI) must comply by 2 August 2028. Parliament has also introduced a targeted ban on AI systems that generate sexual or intimate content without consent. The most visible disagreement is over how long companies get to mark AI-generated content (Parliament wants three months, Council wants longer).
Why founders should care
These deadlines are now locked in across EU institutions, which means they’ll likely survive trilogue negotiations. If you’re selling to Europe, particularly in hiring, credit, or biometrics, your EU deadline is 2 December 2027, assuming your system is classified as high-risk. That’s roughly 20 months away. The AI Act’s definition of high-risk includes most automated decision-making that affects people’s rights or opportunities.
The deadline pressure is real. For foundational model providers (Claude, GPT, etc.), compliance is due by August 2028. For small teams using AI to build customer-facing products, the December 2027 deadline is the wall you’re approaching. The EU enforcement regime comes into force in August 2026, meaning the EU AI Office will have the power to issue fines starting then (up to 3 percent of global annual turnover or 15 million euros, whichever is higher).
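To put the fine ceiling in perspective: the flat 15 million euro floor dominates until global annual turnover passes roughly 500 million euros, after which the 3 percent figure takes over. A quick back-of-envelope sketch (the turnovers are illustrative, and actual fines depend on the violation and the final negotiated text):

    # Back-of-envelope sketch of the fine ceiling described above:
    # the higher of 3% of global annual turnover or EUR 15 million.

    def max_fine_eur(global_turnover_eur: float) -> float:
        """Upper bound under the whichever-is-higher rule."""
        return max(0.03 * global_turnover_eur, 15_000_000)

    # The flat EUR 15M floor dominates below ~EUR 500M turnover.
    for turnover in (50e6, 500e6, 2e9):
        print(f"turnover EUR {turnover:,.0f} -> ceiling EUR {max_fine_eur(turnover):,.0f}")

For a small team, in other words, the relevant number is 15 million euros, not 3 percent.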
So what?
If you’re building AI products: If you sell to the EU and your product makes decisions about hiring, credit, insurance, benefits, or other consequential outcomes, you have 20 months to comply with the AI Act. That means now is the time to classify your system (is it actually high-risk under the EU definition?), document your training data, design your transparency features, and plan your conformity assessment. If you’re still unsure whether your system counts as high-risk, you’re behind. Start the classification work immediately.
If you’re using AI in your business: If you use AI in the EU for hiring, lending, or decisions affecting people, assume you’re subject to the AI Act as a user (a “deployer” in the Act’s terminology), not just a provider. You may need to ensure the systems you buy are compliant. The liability question (who’s responsible if the system fails?) is still being negotiated in trilogue, but you’re likely on the hook in some form. Track the trilogue outcome around user liability (decisions expected by late April).
If you’re advising AI companies: Tell clients with EU exposure that the 2 December 2027 deadline for high-risk systems is now legally locked in across EU institutions. It’s the working assumption for negotiations. Clients should assume this deadline will hold and plan for it. If they sell HR, credit, or insurance products to the EU, they have 20 months and should start conformity assessment planning now.
Who feels this most
HR / Hiring: Your EU deadline is 2 December 2027. Hiring tools are explicitly high-risk under the AI Act. Start documentation, fairness testing, and impact assessment immediately. The EU AI Office’s enforcement powers start in August 2026, so by then you should have a credible compliance plan for hitting the December 2027 deadline.
Financial Services: Credit scoring, loan approval, and fraud detection are high-risk. Same 2 December 2027 deadline. This is a hard deadline for any bank or fintech selling to the EU.
Healthcare: Healthcare AI is not uniformly high-risk, but diagnostic and treatment recommendation systems are. If you sell to EU hospitals or clinics, classify your system now. If it is high-risk, you have 20 months to comply.
🇬🇧 UK | Copyright and AI policy remains frozen, but the door is cracking open
Status: 🟡 Watch - The UK government backed away from its own opt-out proposal, which means copyright and AI in the UK is now a live policy question with no clear answer
What is happening
On 19 March 2026, the UK Department for Science, Innovation and Technology (DSIT) published its response to a December 2024 consultation on copyright and artificial intelligence. The government had previously favored an opt-out system: allow AI companies to train on copyright works, but let rights holders opt out. This proposal was overwhelmingly rejected by the creative industries (film, music, publishing, and visual arts).
The government’s March response confirms it’s abandoning the opt-out option and has no new preferred policy to replace it. Instead, the UK is adopting a “wait and see” approach, monitoring ongoing litigation (Getty Images v Stability AI in the US, similar cases in the UK), international developments, and market dynamics before committing to any reform. The government states it intends to legislate on AI and copyright eventually, but has no timeline.
Why founders should care
The UK copyright question is now genuinely open-ended. The government has rejected its own best idea and is waiting for court cases and market forces to resolve the issue. If you’re training large language models or image generators on copyright material, you’re technically violating UK copyright law (there’s no AI-specific exception for training).
The government isn’t enforcing this aggressively right now, but that could change if courts in the US (Getty case) or the EU (where similar litigation is ongoing) rule against AI companies. For now, you have a grace period, but it’s not a long one. By late 2026 or 2027, the government is likely to revisit this question, possibly with a different answer based on what courts decide. If you’re UK-based and training on copyright material, assume you’re taking a risk that may be solved by policy, by court, or by market consolidation.
So what?
If you’re building AI products: If you train generative models on copyright material and serve UK customers, you’re not covered by a copyright exception for AI training: the UK’s existing text-and-data-mining exception covers non-commercial research only, and unlike the EU the UK has no commercial TDM exception with a rights-holder opt-out. You’re in a legal gray zone. The government isn’t enforcing it now, but courts may clarify the issue by late 2026. If you’re based in the UK, monitor the Getty case outcome and any UK copyright litigation closely. If you’re based elsewhere and selling to the UK, you have more breathing room, but don’t assume it’s permanent.
If you’re using AI in your business: If you use generative AI trained on copyright material (almost all large models do this), understand that the UK government’s position is “we don’t know yet.” This is lower risk than the EU AI Act, but it’s not zero risk. No action needed now, but track the Getty case and any UK copyright decisions closely.
If you’re advising AI companies: Tell clients with UK copyright exposure that the government has paused reform and is waiting for courts and international developments to clarify the issue. This buys time, but it isn’t a permanent solution. Copyright enforcement risk in the UK is lower than in the EU (which is actively negotiating copyright protections in the Digital Omnibus) but higher than in the US (which has broad fair use).
Who feels this most
Media & Publishing: If you’re training AI models on published works (books, articles, images), the UK government’s wait-and-see approach means you’re in a policy limbo that could swing either way. The EU is negotiating stronger copyright protections for creators; the UK is waiting. If you’re a rights holder, the UK is currently less protective than the EU. If you’re a model builder, the UK is currently less restrictive than the EU, but that could flip.
🇬🇧 UK | Children’s online safety regulations now include AI obligations
Status: 🟡 Watch - A new consultation landed on 2 March proposing tough new rules for AI chatbots and generative AI services, open for comment until 26 May
What is happening
On 2 March 2026, the UK Department for Science, Innovation and Technology (DSIT) launched “Growing up in the online world: a national conversation,” a consultation on online child safety. The consultation includes explicit obligations for AI chatbots and generative AI services. The proposed measures include: stronger age assurance mechanisms, a potential statutory minimum age for social media, raising the UK’s age of digital consent (currently 13), restrictions on features like livestreaming and disappearing messages, and new obligations for AI chatbots and generative AI to protect children.
On 12 March, the UK’s Information Commissioner’s Office (ICO) and the communications regulator Ofcom issued coordinated letters to social media and video platforms demanding compliance action beyond self-declaration. The consultation closes on 26 May 2026.
Why founders should care
If you’re building a chat product, a generative AI tool accessible to children, or any service that might attract users under 16, this consultation is directly about you. The UK is moving toward age-gating and content safety obligations for AI. This isn’t final regulation yet (consultation closes 26 May), but the direction is clear: AI companies serving the UK will need to build age verification and child safety features. The consultation is broad, which means the final rules could be narrower or broader depending on feedback. For now, treat this as a signal: UK child safety regulation for AI is coming, probably within 12 to 18 months after consultation closes.
So what?
If you’re building AI products: If your AI product is accessible to anyone under 16 in the UK, you need to think now about age verification (how do you know who’s using it?) and age-appropriate safeguards (what does your product do if a child accesses it?). The consultation is open until 26 May. If you have UK customers or users, consider responding to the consultation to shape the final rules. Implementation timeline is likely 2027 at the earliest.
If you’re using AI in your business: If you use AI chatbots or generative AI services in the UK, track this consultation. If the final rules require age verification or content filtering, you’ll need to ensure your AI provider complies. For most small teams this is a low priority right now; watch for the government’s response after the consultation closes on 26 May.
If you’re advising AI companies: Tell UK-based clients that child safety regulation for AI is coming, and that consultation feedback (open until 26 May) is the window to shape it. If a client’s product serves children or could serve children, respond to the consultation. Final rules likely land in 2027.
Who feels this most
EdTech: If you build educational AI products used by students under 16 in the UK, this consultation is directly about you. Age verification and content safety will be mandatory. Start thinking now about how to implement age checks and how to protect minors from inappropriate outputs.
Probably noise
Developments unlikely to affect founders in the next 12 months, and why
🇪🇺 EU | GPAI Code of Practice finalized: The EU AI Office published the final GPAI (general-purpose AI) Code of Practice on 5 March 2026. This is soft law, not binding, and sets out voluntary best practices for AI model providers on transparency, copyright, and safety testing. It’s not an enforcement mechanism for small teams right now. If you’re a model provider (building Claude-like systems), pay attention. If you’re using AI in your business or building customer products, this is background noise for now.
🇬🇧 UK | AI Growth Lab sandboxes delayed: The UK DSIT opened consultation on its AI Growth Lab (regulatory sandboxes for testing AI under modified rules) in October 2025, with responses due 2 January 2026. No new updates in March. This remains a policy proposal with no confirmed timeline for implementation. It is interesting but not actionable yet. Revisit if the government announces a launch date.
The pattern this week
Moves in all three jurisdictions converged this week, and they point in the same direction: regulation is shifting from legislative uncertainty to enforcement action. The White House released its preemption framework on 20 March, days after the FTC published its Section 5 playbook on 11 March. The EU locked in AI Act deadlines across institutions at the same moment trilogue negotiations began. The UK restarted conversations on copyright and child safety. None of these are final rules, but all of them are commitments. The US preemption fight is the wildcard. If the Administration wins in court, founders get a simpler compliance landscape. If it loses, the state-by-state mess continues. Either way, the next 12 months are when that gets decided, and your roadmap should account for both possibilities.
The EU deadlines are firm now (2 December 2027 for high-risk systems, 2 August 2028 for general-purpose AI). If you sell to Europe, those aren’t negotiable dates. The FTC’s Section 5 policy gives the agency explicit enforcement authority over AI practices. Expect enforcement actions by mid-2026. The UK is moving more slowly but in the same direction: child safety, copyright clarity, and AI-specific rules are all coming, likely in 2027.
The common thread is that regulators are no longer waiting for Congress or perfect legislative solutions. They’re using existing authority (FTC Section 5, EU AI Act, UK ICO powers, UK Ofcom powers) to enforce AI practices now, and using the next 12 to 24 months to build the case for more binding rules. For founders, this means: document your practices, test for discrimination, label generated content, and assume that ambiguity will be resolved against you, not for you. Regulation is moving from “should we?” to “how are you complying?”
Sector getting the most heat this week
Sector: HR / Hiring
You’re in the regulatory crosshairs across all three jurisdictions right now. In the US, New York City’s bias audit requirements are live, Illinois allows candidates to sue directly for discrimination via AI, and Colorado’s impact assessment deadline is June 2026. The FTC’s Section 5 policy statement explicitly flags automated employment decisions as a focus area. In the EU, hiring tools are classified as high-risk under the AI Act, and your compliance deadline is 2 December 2027.
In the UK, the children’s safety consultation covers obligations for AI chatbots and generative AI, which reaches hiring and assessment tools used in educational settings or accessible to under-16s. If you build, sell, or use AI for hiring, your 2026 is now allocated to compliance work.
New York City requires bias audits and advance notice. Colorado requires impact assessments. The EU requires high-risk classification, training data documentation, and conformity assessment. Start with whichever deadline is closest to you (Colorado is June 2026; NYC is already live). If you sell hiring tools to multiple states or the EU, pick the hardest standard you operate under and build for that; in most cases it covers you everywhere else.
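One way to operationalize “build for the hardest standard” is to treat each jurisdiction as a set of obligations and take the union across everywhere you operate. A toy sketch; the obligation labels are illustrative shorthand, not a legal checklist:

    # Toy sketch: the compliance floor is the union of obligations across
    # every jurisdiction you touch. Labels are shorthand, not legal advice.

    OBLIGATIONS = {
        "nyc": {"bias_audit", "candidate_notice", "publish_audit_summary"},
        "colorado": {"impact_assessment", "consumer_notice"},
        "eu": {"high_risk_classification", "training_data_docs",
               "conformity_assessment"},
    }

    def compliance_floor(jurisdictions: list[str]) -> set[str]:
        """Everything you must do if you operate in all given jurisdictions."""
        return set().union(*(OBLIGATIONS[j] for j in jurisdictions))

    print(sorted(compliance_floor(["nyc", "colorado", "eu"])))

The output is the superset you build once; each jurisdiction’s audit then draws on the same artifacts.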
Dates to put in your calendar
11 March 2026 | USA | FTC Policy Statement on AI and Section 5 published. The FTC is now actively enforcing AI practices under existing authority. Review your disclosure practices, data sourcing, and any safety claims you make.
19 March 2026 | UK | DSIT published response to copyright and AI consultation. The government is adopting a wait-and-see approach. If you train models on copyright material, expect continued uncertainty, but no new enforcement action yet.
20 March 2026 | USA | White House releases National Policy Framework for AI with broad preemption proposal and executive order directing DOJ to challenge state laws. The federal-state legal battle is now formally on. Watch for litigation updates.
26 March 2026 | EU | European Parliament confirms position on Digital Omnibus. AI Act compliance deadlines are now locked in: 2 December 2027 for high-risk systems, 2 August 2028 for general-purpose AI.
28 April 2026 | EU | Next Digital Omnibus trilogue session. Watch for movement on copyright protections, user liability, and AI-generated content marking timelines.
26 May 2026 | UK | Consultation deadline for “Growing up in the online world,” including AI chatbot and generative AI obligations. Submit feedback if your product affects children.
June 2026 | USA | Colorado AI Act impact assessment deadline for high-risk systems (moved from January). If you operate in Colorado, this is your next hard deadline.
August 2026 | EU | EU AI Office enforcement powers activate. The office can then issue fines of up to 3% of global annual turnover or 15 million euros, whichever is higher.
2 December 2027 | EU | Compliance deadline for high-risk AI systems (Annex III) under the AI Act. If you sell hiring, credit, insurance, biometrics, or consequential decision-making tools to the EU, this is your deadline.
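If you want the forward-looking dates above in an actual calendar, here is a small sketch that writes them to a standard .ics file most calendar apps can import. The month-only deadlines (“June 2026”, “August 2026”) are pinned to the first of the month here, so adjust them as the real dates firm up.

    # Sketch: write the forward-looking deadlines above to an .ics file.
    # Month-only dates are pinned to the 1st; adjust as they firm up.
    from datetime import date, datetime, timezone

    DEADLINES = [
        (date(2026, 4, 28), "EU: next Digital Omnibus trilogue session"),
        (date(2026, 5, 26), "UK: child safety consultation closes"),
        (date(2026, 6, 1), "USA: Colorado impact assessments due (month approximate)"),
        (date(2026, 8, 1), "EU: AI Office enforcement powers activate (month approximate)"),
        (date(2027, 12, 2), "EU: AI Act deadline for high-risk (Annex III) systems"),
    ]

    def to_ics(events) -> str:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        lines = ["BEGIN:VCALENDAR", "VERSION:2.0",
                 "PRODID:-//whats-building//deadlines//EN"]
        for i, (day, summary) in enumerate(events):
            lines += [
                "BEGIN:VEVENT",
                f"UID:deadline-{i}-{day.isoformat()}@example.invalid",
                f"DTSTAMP:{stamp}",
                f"DTSTART;VALUE=DATE:{day:%Y%m%d}",
                f"SUMMARY:{summary}",
                "END:VEVENT",
            ]
        lines.append("END:VCALENDAR")
        return "\r\n".join(lines) + "\r\n"

    with open("ai_regulatory_deadlines.ics", "w", newline="") as f:
        f.write(to_ics(DEADLINES))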
Sources:
White House National AI Policy Framework Calls for Preempting State Laws, Protecting Children
President Trump Signs Executive Order Challenging State AI Laws
The FTC Just Dropped Its AI Enforcement Playbook — And AI Agents Are in the Crosshairs
March 2026: Federal Deadlines That Will Reshape the AI Regulatory Landscape
EU Digital Omnibus on AI: Council and Parliament Align Mandates as Trilogue Negotiations Begin
HRDef: AI in Hiring: Emerging Legal Developments and Compliance Guidance for 2026
