Act Now Brief | Monday 13 April 2026
Two items to flag at the top. The FCA chatbot enforcement risk, first covered in the 30 March edition, didn’t carry through to the 6 April brief. It’s back here because the action is still live and shouldn’t have fallen off the radar. Second, the Digital Omnibus status has moved on since the 6 April brief: the picture is more nuanced than “it didn’t pass, August 2 holds.”
🔴 Act now
Enforcement actions, live deadlines, and things that genuinely require action this week
🇬🇧 UK | FCA AI chatbot enforcement risk: still live
First covered in the 30 March edition. No material new development, but the 6 April brief didn’t carry this forward and it still requires action.
On 26 March, the Financial Conduct Authority (FCA) published its latest perimeter report, explicitly naming AI-powered personal finance tools and chatbots as a fast-growing area of unregulated activity. The FCA’s perimeter reports signal where formal enforcement attention is heading. An AI tool positioned as “guidance” crosses into regulated advice under the Financial Services and Markets Act 2000 (FSMA) the moment it recommends a product, summarises pension exit fees, or suggests a fund. Consumer Duty adds a second layer: if your AI produces a hallucinated rate of return and a customer acts on it, you’re exposed regardless of your terms of service. The FCA has said that unsupervised generative AI should not be used for substantive financial communications.
If you acted on this after the 30 March edition, you’re done. If you haven’t, this week is the week.
So what?
If you’re building AI products: Map your product’s outputs against the FSMA regulated/unregulated advice line. If you can’t clearly say why a given output is guidance and not advice, that gap is your compliance risk.
If you’re using AI in your business: Document how you’re supervising any AI tool that produces output a customer might construe as financial guidance. “We didn’t know” is not a Consumer Duty defence.
If you’re advising AI companies: Tell fintech clients the 26 March perimeter report is their signal to get their FSMA and Consumer Duty mapping done before the FCA launches a thematic review.
Who feels this most:
Fintech and wealthtech: You’re the named category. The burden of demonstrating you’re on the right side of the regulated/unregulated line is yours.
HR and benefits tools: If your AI helps employees understand pension or salary sacrifice options, you may be closer to the advice boundary than you think.
🟡 Heads up
Developments that are not urgent today but could require action within the next two weeks
🇪🇺 EU | Digital Omnibus update: more nuanced than “it didn’t pass”
The 30 March and 6 April editions both reported the Digital Omnibus vote as having failed on March 26, with August 2 holding firm. That was accurate at the time, but the legislative picture has since moved.
On 18 March, the IMCO and LIBE committees in the European Parliament adopted their joint report on the Omnibus AI package. The proposal would delay Annex III high-risk AI obligations by up to 16 months, pushing the enforcement date to as late as December 2027 or August 2028. The mechanism is conditional: the delay takes effect only once the European Commission confirms that the harmonised technical standards needed for compliance are available (two EU standardisation bodies missed their 2025 deadline and are now targeting the end of 2026). Once confirmed, the deadline shifts to six months later for Annex III systems and twelve months for Annex I.
The critical detail: for the delay to become law before August 2, a final political agreement must be reached by around June. Negotiations are live. If an agreement lands in time, August 2 moves. If not, it stays.
The planning position is unchanged from previous editions: build for August 2 and treat a confirmed delay as a bonus. But now you have a clearer picture of when you’ll know for sure.
So what?
If you’re building AI products: Do not pause documentation or conformity assessment work on the assumption that a delay is coming. June is the checkpoint.
If you’re using AI in your business: If you deploy AI in EU Annex III categories (hiring, credit, biometrics, education), keep your risk documentation and impact assessments going. Don’t wait.
If you’re advising AI companies: Tell clients the June window is the one to watch. A deal before June means the delay takes legal effect. After June, August 2 stands. Build backwards from August 2 with June as the review point.
🇺🇸 USA | DOJ AI Litigation Task Force first case expected any day
The Department of Justice (DOJ) AI Litigation Task Force was established in January 2026, directed to challenge state AI laws on grounds of unconstitutional regulation of interstate commerce and federal preemption. The Department of Commerce’s evaluation of state AI laws was due on 11 March. Legal analysts had been expecting the first case “by spring.” That window is now.
Colorado’s AI Act (which takes effect June 30, 2026) is widely seen as the most likely first target. If the DOJ files and wins an early injunction, the US state compliance picture shifts almost immediately. Colorado’s June 30 deadline could freeze before it lands.
This doesn’t mean stop compliance work. It means don’t over-invest in state-specific architecture before you know what gets filed.
So what?
If you’re building AI products: Keep your US state compliance approach modular. Don’t lock in a Colorado-specific architecture this week.
If you’re using AI in your business: Texas TRAIGA (the Texas Responsible Artificial Intelligence Governance Act, in force since 1 January 2026) is not a target of this litigation. If you have Texas users or employees and deploy high-risk AI, your obligations there are live regardless of what happens with Colorado.
If you’re advising AI companies: Your clients need to know that a DOJ filing with an injunction would directly affect whether Colorado compliance is required by June 30. Watch for it.
Who feels this most:
HR and hiring tools: Colorado’s Act specifically targets algorithmic discrimination in employment decisions. If the DOJ enjoins it, that compliance deadline may freeze.
🟢 On the radar
Worth knowing about, no action needed yet
🇺🇸 Colorado AI Act: 11 weeks to June 30. No change from the 6 April edition. High-risk AI in employment, housing, credit, and healthcare must comply by June 30. The DOJ case could change this. Assume it won’t until you hear otherwise.
🇪🇺 EU AI content labelling: August 2, 2026. The second draft of the Code of Practice on marking and labelling of AI-generated content has been published; the final version is due in June. From 2 August, audio, image, video, and text outputs from generative AI must carry machine-readable markings. This obligation is not affected by the Digital Omnibus delay proposal.
🇪🇺 GPAI enforcement activates August 2, 2026. General-Purpose AI (GPAI) model obligations have been in force since August 2025. Enforcement powers fully activate on August 2, under four months from now. Fines reach up to €15 million or 3% of global revenue. The collaborative window with the EU AI Office is closing.
🇺🇸 Texas TRAIGA in force with real penalties. In force since 1 January 2026. Uncurable violations: $80,000 to $200,000 per incident, up to $40,000 per day ongoing. Exists regardless of what the DOJ does to other state laws.
🇬🇧 ICO automated decision-making guidance is coming. The Information Commissioner’s Office (ICO) is preparing a consultation draft on automated decision-making and profiling. No date confirmed. When it arrives, it will set the practical enforcement lines for AI in hiring, benefits, and credit in the UK.
🇬🇧 FCA Mills Review due summer 2026. The independent review into AI in financial services is expected to report this summer. Its recommendations will determine whether the FCA moves from principles-based oversight to prescriptive rules. Relevant if you’re in advisory, credit, or fraud detection in UK financial services.
🇺🇸 FTC AI-washing enforcement is continuing. More than a dozen cases in 2025. No change since the 6 April edition. If your marketing overstates what your AI does, you’re in scope regardless of size.
The one thing to do this week
If you haven’t acted on the FCA perimeter report since it first ran in the 30 March edition, do it now. Map your product outputs against the FSMA regulated/unregulated advice line and document your reasoning. That’s the document the FCA will ask for if it comes looking.
If you’re already across the FCA item, make sure your EU compliance work hasn’t slowed down on the assumption that the Digital Omnibus delay is confirmed. It isn’t yet. June is when you’ll know.
Deadline tracker
Region | Item | Deadline | Status
EU | High-risk AI systems (Annex III): employment, credit, education, biometrics | 2 August 2026 | Proposed delay to Dec 2027/Aug 2028 pending political agreement before June
EU | GPAI model enforcement (AI Act and General-Purpose AI Code of Practice) | 2 August 2026 | Coming up
EU | AI-generated content labelling obligations (Article 50) | 2 August 2026 | Coming up
EU | AI content labelling Code of Practice finalised | June 2026 | In drafting
EU | Digital Omnibus: window for political agreement to protect delay | Before June 2026 | In negotiation
USA | Colorado AI Act: high-risk AI in employment, credit, housing, healthcare | 30 June 2026 | 11 weeks away; possibly enjoined by DOJ
USA | Texas TRAIGA high-risk AI obligations | 1 January 2026 | Now in force
USA | DOJ AI Litigation Task Force: first case | Spring 2026 | Imminent
USA | Oregon SB 1546 and Washington HB 2225 (AI companion chatbots, private liability) | 1 January 2027 | Coming up
UK | ICO automated decision-making guidance consultation | TBD 2026 | Draft expected
UK | FCA Mills Review report | Summer 2026 | Coming up
Sources:
Digital Omnibus AI regulation proposal (European Commission)
European Parliament legislative train: Digital Omnibus on AI
Colorado AI Act and executive order disruption: King & Spalding
AI enforcement accelerates as federal policy stalls: Morgan Lewis
