AI Deadline | Thursday 23 April 2026
Things moving through AI regulatory pipelines that will matter in the next 3 to 6 months
🔴 Act now
The Digital Omnibus trilogue meets this Monday, April 28. A political deal is possible. If Parliament and Council reach agreement, it settles two things most EU-facing founders still haven't locked into their planning.
The first is the watermarking deadline. The current proposal puts AI-generated content labelling obligations at November 2, 2026 - six months away. If that's confirmed Monday, you have a firm implementation target for audio, image, video and text outputs served to EU audiences. The alternative outcome is February 2, 2027, if the Council pushes for more runway.
The second is the formal delay of Annex III high-risk AI obligations. Both Parliament and Council are aligned on December 2, 2027 for standalone high-risk systems (hiring, credit, biometrics, education) and August 2, 2028 for AI embedded in regulated products. A deal Monday makes those dates binding, replacing August 2, 2026 as your planning target for high-risk compliance.
If a deal is reached, the sequence runs: agreement April 28, Parliament endorsement May, Council June, publication in the Official Journal July, enforcement August or November. On watermarking, that's roughly five months between political agreement and obligation start.
Watch Monday's outcome. If a deal is announced, your immediate action is to map which of your outputs are in scope and identify a technical watermarking approach. Machine-readable labelling for audio and video requires actual infrastructure, not just a visible badge. Five months is not much time if you're generating content at volume.
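As one hedged illustration of what "machine-readable" can mean for still images, the sketch below appends a provenance tEXt chunk to a PNG byte stream using only the Python standard library. The `ai-generated` keyword is our own choice, not a mandated label, and this is a minimal sketch, not a full provenance implementation:

```python
import struct
import zlib

def add_png_label(png: bytes, keyword: str, value: str) -> bytes:
    """Insert a tEXt chunk carrying a machine-readable label
    immediately before the IEND chunk of a PNG byte stream."""
    payload = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    chunk = struct.pack(">I", len(payload)) + b"tEXt" + payload
    # Per the PNG spec, the CRC covers the chunk type and data fields.
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + payload) & 0xFFFFFFFF)
    # The IEND chunk starts 4 bytes before its type field (the length field).
    iend = png.rfind(b"IEND") - 4
    return png[:iend] + chunk + png[iend:]
```

In practice, production deployments would more likely adopt an established provenance standard such as C2PA Content Credentials, which survive format conversion better than a bare metadata chunk; the point here is only that the label must live in the file itself, readable by software, not just rendered on screen.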
🟡 Heads up
USA | xAI sues to block Colorado's AI Act before June 30
On April 10, Elon Musk's xAI filed a federal lawsuit against Colorado seeking to enjoin its AI Act before the June 30 effective date. The constitutional claims: First Amendment (developing an AI model is "expressive"), Dormant Commerce Clause (Colorado can't regulate companies with no presence there) and due process (terms like "algorithmic discrimination" are too vague to enforce). A hearing on early injunctive relief is expected before June 30. The lawsuit doesn't excuse compliance. Assume the deadline holds until a court says otherwise. Do the portable work now - risk assessments and bias documentation - and hold off on Colorado-specific architecture until the law's survival is clear.
USA | Bartz v. Anthropic fairness hearing - May 14
The $1.5 billion Anthropic copyright settlement goes before the court for final approval on May 14 in San Francisco. The claim rate is extraordinary: 91.3% of eligible works were claimed, against a typical class action rate of around 10%. If Judge Martínez-Olguín approves, roughly $2,931 per book becomes the commercial floor for training on copyrighted text without consent. That number will appear in every training data licensing negotiation for years. If you train models on third-party text and haven't looked at your data sourcing, three weeks is enough time to start.
In focus
EU | What GPAI enforcement actually means for small AI companies - and whether you're in scope
August 2 has appeared in every brief since March. Here is what it specifically requires.
"General-purpose AI model provider" sounds like it means OpenAI or Google. It doesn't, entirely. Under the AI Act, a GPAI provider is any company that makes a model available to third parties - via API, download, or integration - for use in other applications. If you offer a fine-tuned model, an API endpoint, or a model-as-a-service product to EU customers, you're a GPAI provider. Size isn't the threshold. Deployment intent is.
From August 2, the EU AI Office gains full enforcement powers. It can request documentation at any time, impose mitigations, order model recalls and fine up to €15 million or 3% of global annual revenue, whichever is higher. The informal, collaborative approach that has characterized the AI Office's first year ends. This is enforcement.
What you need to have ready: a technical documentation pack covering architecture, training methodology, capabilities and limitations, and testing results; instructions for downstream use; and a summary of the content used for training - specifically, your copyright compliance approach. The AI Office has a template for the training data summary. It isn't published yet, but the category is clear: you need to be able to state what you trained on and how you handled copyright.
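As a rough planning aid, the items above can be tracked as a simple checklist. The field names below are our own shorthand for the categories just listed, not the AI Office's template fields:

```python
from dataclasses import dataclass

@dataclass
class GPAIDocPack:
    """Informal checklist of the documentation items described above."""
    architecture: str = ""
    training_methodology: str = ""
    capabilities_and_limitations: str = ""
    testing_results: str = ""
    downstream_use_instructions: str = ""
    training_data_summary: str = ""  # incl. your copyright-compliance approach

    def missing(self) -> list[str]:
        """Names of sections not yet drafted."""
        return [name for name, text in vars(self).items() if not text.strip()]
```

Even a structure this crude makes the gap visible: most teams discover the training data summary is the empty field that stays empty longest.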
If you've signed the GPAI Code of Practice, you're in a better compliance position by default. The Code offers a presumption of conformity. If you haven't signed, you need to demonstrate compliance through alternative means and get that approach approved by the Commission. For smaller providers, signing the Code is the easier path.
Systemic-risk models carry additional obligations - adversarial testing, incident reporting, cybersecurity measures. The threshold is roughly 10²⁵ FLOPs of compute used for training. Most models below frontier scale don't hit this. But if you've trained at significant compute or distribute widely, check the threshold.
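A quick way to sanity-check your position against that threshold is the standard back-of-envelope approximation of roughly 6 FLOPs per parameter per training token for dense transformers. The model and token figures below are hypothetical, and the Act counts cumulative training compute, so treat this as a first filter only:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs; the AI Act's presumption threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token
    (forward + backward pass, dense transformer approximation)."""
    return 6.0 * params * tokens

# Hypothetical 7B-parameter model trained on 2T tokens:
flops = estimated_training_flops(7e9, 2e12)   # ~8.4e22
over = flops >= SYSTEMIC_RISK_THRESHOLD       # roughly 100x below threshold
```

By this estimate, a 7B model on 2T tokens sits two orders of magnitude under the line; models that cross it are trained at genuinely frontier scale.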
If you expose any model to EU users via API, use the next 100 days to build your documentation pack. Start with the technical documentation and the training data summary. Those are the two things the AI Office will ask for first, and they take longer to assemble than you'd expect.
🟢 On the radar
🇬🇧 UK | ICO automated hiring consultation closes May 29. 36 days. If you build or use hiring AI in the UK, this is your window to influence the final guidance. It lands Q3 2026 and becomes enforceable. Respond now or accept whatever comes out.
🇬🇧 UK | Children's online safety consultation closes May 26. 33 days. AI chatbots and generative tools accessible to under-16s in the UK are in scope. Final obligations likely include age verification and content safety requirements, with implementation in 2027 at the earliest.
🇬🇧 UK | FCA AI Live Testing second cohort now live. The application window closed March 24. A second cohort joins the FCA's controlled testing environment this month, gaining access to synthetic financial data. Watch for cohort three if you missed this round and build financial AI.
🇺🇸 USA | FTC Take It Down Act enforcement is active. The FTC has named AI-generated sexual deepfakes as a top enforcement priority. Commissioner Meador has said it's being treated as urgent. If your product generates or hosts synthetic intimate content, this isn't hypothetical.
🇺🇸 USA | Texas TRAIGA in force. Effective January 1, 2026. Uncurable violations run $80,000 to $200,000 per incident. It exists regardless of what happens to Colorado or any other state law. If you deploy high-risk AI for Texas users or employees, your obligations are live now.
The one thing to do this week
Watch Monday's Digital Omnibus outcome. If a deal is announced, spend an hour mapping which of your outputs are in scope for watermarking and identifying a technical approach. Six months moves faster than it looks.
