Act Now Brief | Monday 23 March 2026
For founders, operators, and creators who are using AI | USA · UK · EU
🇺🇸 USA | FTC just defined what it will enforce on your AI systems
The Federal Trade Commission published its Policy Statement on AI and Section 5 of the FTC Act on March 11. This isn’t a new rule. It’s the FTC explaining how it will use existing law to bring enforcement actions against AI companies. The statement covers algorithmic discrimination, deceptive AI-generated content, misleading claims about what your AI can do, undisclosed use of AI in marketing, and privacy violations tied to data collection for training.
Section 5 of the FTC Act already prohibits unfair or deceptive practices. The FTC is now clarifying that this applies to AI. If your system discriminates, misleads, or makes false claims about its capabilities, the FTC will treat it the same way it would treat a deceptive advertisement or discriminatory hiring practice. There are no safe harbors for “just being AI.”
The policy also signals that the FTC will challenge state AI laws that require changes to your AI’s outputs, saying those requirements are themselves deceptive. This creates a collision risk between state and federal law. Build your product for compliance now, not after enforcement.
So what?
If you’re building AI products: Document what your system does and cannot do. Don’t claim capabilities it doesn’t have. Don’t hide that AI is being used. Test for discrimination. The FTC will look at these things.
If you’re using AI in your business: If you use AI for hiring, lending, credit decisions, or customer decisions, document it and monitor it for bias. The FTC treats discrimination by an AI system the same as discrimination by a human.
If you’re advising AI companies: Tell clients this is immediate. Not a future problem. The FTC policy is live now. Audit for the three key areas: false claims, hidden use, and algorithmic discrimination.
Who feels this most:
Hiring and HR: If you make or sell AI screening tools, hiring platforms, or scoring systems, the FTC is already building evidence in this space. Your documentation needs to exist before enforcement, not after.
Content generation and marketing: If your AI generates images, video, testimonials, or endorsements, disclosure of AI use isn’t optional. The FTC applies the same deception standards to synthetic media as to human-created content.
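If you’re wondering what “test for discrimination and document it” looks like in practice, here is a minimal sketch of the EEOC-style four-fifths (adverse impact) check that bias audits commonly start from. The group names and counts are hypothetical, and one ratio is nowhere near a full audit, but it shows the shape of the evidence regulators expect you to be able to produce:

```python
# Minimal adverse-impact check (EEOC "four-fifths" rule of thumb).
# Illustrative sketch only: group names and counts are hypothetical,
# and a real bias audit involves far more than a single ratio.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the AI tool selected."""
    return selected / applicants

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes from an AI hiring tool.
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}

ratio = adverse_impact_ratio(rates)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.48 ≈ 0.62
if ratio < 0.8:
    # Below the four-fifths threshold: flag, investigate, and keep
    # a dated record of what you found and what you changed.
    print("FLAG: document, investigate, and mitigate before deployment")
```

Run something like this on every protected group in your applicant pool, on a schedule, and keep the dated outputs. The record that you ran the check is the documentation the FTC will ask for.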
🇪🇺 EU | The August 2 deadline is now five months away. Readiness is uneven.
The EU AI Act rules for high-risk systems take effect on August 2, 2026. This isn’t negotiable. Conformity assessments, technical documentation, CE marking, and registration with the EU database must be done by that date for any high-risk system placed on the EU market or whose output is used in the EU.
On March 18, the European Parliament committees adopted a joint position on the “Digital Omnibus” package, which would delay high-risk AI obligations by up to 16 months, conditional on harmonized standards. The Parliament votes on March 26. This may pass. Assume it won’t. Build for August 2.
The problem: Only 8 of 27 EU member states have designated their AI Act enforcement authorities. Many member states aren’t ready. This suggests enforcement will be chaotic and inconsistent at first, but enforcement will happen. If you’re shipping AI into the EU, you need to be compliant by August 2. Non-compliance exposes you to fines up to €15 million or 3 percent of global turnover, whichever is higher.
So what?
If you’re building AI products: If your system qualifies as high-risk under Annex III, you need conformity assessment, technical documentation, and CE marking before August 2. If this work isn’t at least 40 percent done, you’re behind.
If you’re using AI in your business: If you deploy high-risk AI in the EU (hiring, lending, healthcare, criminal justice), you’re the deployer. You must conduct impact assessments, document risks, and give consumers notice and opt-outs. Start now.
If you’re advising AI companies: Tell clients the Digital Omnibus vote happens March 26. If it passes, revise their timelines against the extended dates. If it doesn’t, August 2 is real. Either way, they need a compliance timeline by the end of March.
🇬🇧 UK | The ICO is building enforcement cases. Your process documentation is the evidence.
The Information Commissioner’s Office published its AI and Biometrics Strategy update in March 2026. Key sentence: “Throughout 2026 the ICO will actively monitor advancements and work with AI developers and deployers to ensure they are clear on what the law requires.”
Translation: The ICO is gathering evidence now. It’s engaging with 11 major AI foundation model developers to understand their data protection practices. It’s building a picture of who is compliant and who isn’t. When enforcement comes, the ICO will use what it finds in these conversations against you.
The ICO is also focused on deepfakes, biometric systems, and automated decision-making (hiring, benefits, lending). If you use AI in these spaces, the ICO is watching.
Documentation matters. The ICO will ask to see your data processing records, your testing for bias, your risk assessments, and your transparency disclosures. If these documents don’t exist when asked, that’s an enforcement problem on top of the compliance problem.
So what?
If you’re building AI products: If you’re a UK AI company or building for UK use, the ICO is looking at your data practices and documentation. Have it ready. Test your systems for bias and document the results.
If you’re using AI in your business: If you use AI for hiring, benefits decisions, or automated assessment, the ICO expects you to have documentation showing you know the risks and have mitigated them. Run that audit now.
If you’re advising AI companies: Tell clients the ICO engagement with foundation models isn’t friendly advice. It’s the building of an enforcement picture. Get documentation in place before the ICO knocks.
🟡 Heads up
🇺🇸 USA | White House AI framework signals federal preemption push
The White House published its National Policy Framework for Artificial Intelligence on March 20. The framework proposes federal legislation to unify AI policy and explicitly preempt state laws that impose compliance costs the federal government deems excessive.
This matters because states (California, Colorado, Texas, New York, and Illinois) have AI laws that are already live. The Trump administration is signaling that it will use federal law to override them where it can. The FTC’s Section 5 policy statement is part of this strategy. The Department of Justice launched a litigation task force to challenge state AI laws on constitutional grounds.
For small AI companies, this creates uncertainty. Do you build for state laws or federal law first? Build for the strictest rule you will face. Build for the states, and you will be compliant if federal preemption wins. The reverse isn’t true.
California’s AB 1883 (workplace surveillance AI) passed out of committee on March 19. Multiple state hiring bills are advancing. The patchwork is getting worse, not better.
So what?
If you’re building AI products: If you sell into multiple US states, compliance cost is going up, not down. Build for California and Colorado standards unless the litigation task force wins. Plan for litigation. It’s coming.
If you’re using AI in your business: If you use AI for hiring or surveillance in the US, assume state laws will apply. Audit your tools against the strictest state standard in your market.
If you’re advising AI companies: Warn clients that federal preemption is being pursued through litigation and legislation. It will take years to resolve. Design products for state compliance now.
🇪🇺 EU | Code of Practice on AI-generated content reaches second draft
The European Commission published the second draft Code of Practice on marking and labelling of AI-generated content on March 5. This isn’t the law yet. It’s best practice guidance that will likely become law.
The code addresses the labelling of images, video, text, and audio generated by AI. If you build tools that generate or manipulate synthetic media, this is moving toward you. The code will likely require disclosure, labelling, or both. Assume labelling will be mandatory by the end of 2026.
So what?
If you’re building AI products: If you build image, video, or audio generation tools, plan for mandatory labelling or disclosure of AI-generated content. Check the draft code. Begin thinking about how this works at scale.
If you’re using AI in your business: If you use generative AI to create marketing content, product images, or internal materials, prepare for the possibility that external use will require labelling.
If you’re advising AI companies: Tell clients the labelling code is coming. Early adoption of labelling builds credibility with regulators and reduces friction when rules arrive.
🟢 On the radar
California AB 1883 advancing: Requires disclosure and bias testing of workplace surveillance AI. Passed committee on March 19. Heading to a vote. If you build HR or employee monitoring tools, your documentation will soon matter more.
Arizona HB 2311 chatbot bill: Requires labels on chatbots and disclosure of limitations. Set for March 23 hearing. Small impact for now, but Arizona isn’t the last state to move this direction.
Tennessee and Vermont mental health AI bills: Both states advanced legislation prohibiting AI systems from claiming to be qualified mental health professionals. The Tennessee House passed its bill on March 16; the Vermont House passed its own on March 18. Narrow but growing precedent.
UK Data Use and Access Act report on AI and copyright: The UK government published its Copyright and AI Impact Assessment on March 18. Conclusion: wait and see. No immediate reform of copyright law for text-and-data mining. This means uncertainty continues for training data sourcing in the UK. Don’t assume reform is coming soon.
CEO and board liability rising: Insurers now require risk modelling and board certification of AI governance before coverage. If you have AI in your business and no governance structure, you have an insurance problem. This isn’t enforcement yet, but it’s a leading indicator.
The one thing to do this week
Read the FTC Policy Statement on AI and Section 5 of the FTC Act (published March 11, 2026). It’s short. It’s readable. It’s the enforcement policy, not the background. Highlight the sections on deceptive claims, undisclosed use, and discrimination. Then audit your product or your use of AI against those three things. If you can’t document that you have done this audit, you’re exposed. Do it this week.
Deadline tracker
🇪🇺 EU | High-risk AI systems must be compliant with EU AI Act rules (conformity assessment, CE marking, registration) | August 2, 2026 | Five months away. Only 8 of 27 member states are ready. Assume the deadline holds.
🇪🇺 EU | European Parliament votes on Digital Omnibus (proposed extension of high-risk AI deadline) | March 26, 2026 | One week away. Likely to pass but uncertain. Don’t count on a delay.
🇺🇸 USA | Colorado AI Act enforcement begins | June 30, 2026 | Three months away. Applies to high-risk systems affecting employment, housing, credit, health, and education.
🇺🇸 USA | California AB 853 (delayed AI transparency rules) now due | August 2, 2026 | Delayed from January. Requires training data summary for AI systems trained on 1M+ data points.
🇬🇧 UK | ICO finishes engagement with 11 foundation model developers | TBD 2026 | Findings will shape enforcement priorities. No set deadline yet.
🇬🇧 UK | UK government to develop statutory ADM Code of Practice | TBD 2026 | Secondary legislation required. Will provide practical guidance on automated decision-making, but timing is uncertain.
Sources:
White House National Policy Framework for Artificial Intelligence
EU Code of Practice on Marking and Labelling of AI-generated Content Second Draft
AI Enforcement Accelerates as Federal Policy Stalls and States Step In
UK Government Report on Copyright and Artificial Intelligence