What's Building | Thursday 16 April 2026
Things moving through AI regulatory pipelines that will matter in the next 3 to 6 months
🇪🇺 EU | Digital Omnibus trilogue: April 28 is the meeting that locks in your content marking deadline
Status: 🔴 Watch - Political agreement is expected on April 28; the outcome determines whether you have until November 2026 or February 2027 to label AI-generated content for EU audiences
What’s happening
Since last Thursday, the next Digital Omnibus trilogue session has come into sharp focus: April 28, 2026, 12 days from today. The European Parliament and Council are aligned on most major issues, with the remaining sticking point being when synthetic content marking must apply. The Parliament is pushing for November 2, 2026; the Council wants February 2, 2027. Whichever date is agreed becomes your deadline for watermarking AI-generated audio, images, and video served to EU users.
Both institutions have also agreed to add a new prohibited practice to the EU AI Act covering AI systems capable of generating realistic intimate images of identifiable real people without consent.
If a political agreement lands April 28, endorsement by Parliament and Council could follow in May and June, with publication in the Official Journal by July 2026. Separately, the European Data Protection Board (EDPB) and the European Data Protection Supervisor have published a joint opinion urging co-legislators not to weaken oversight of high-risk AI systems in the amendments.
Why founders should care
The synthetic content marking deadline matters for anyone publishing AI-generated images, audio, or video to EU audiences. November versus February is a three-month difference, and it determines how much runway you have to build detection and labelling infrastructure.
The Digital Omnibus will also confirm the compliance deadlines for high-risk AI systems: Annex III systems (hiring, credit, biometrics) by December 2, 2027, and Annex I systems (AI embedded in regulated products) by August 2, 2028. These are no longer proposals. They will be law within months. If you’re building for the EU, treat those dates as fixed now.
So what?
If you’re building AI products: If your product generates images, audio, or video for EU users, the content marking deadline is coming. Build to Parliament’s November 2 date as your target. If the Council’s February date wins, you’ve gained three months as a buffer.
If you’re using AI in your business: If you publish AI-generated content for EU audiences, the same marking deadline applies. Plan your labelling process now, before the final code is published in June.
If you’re advising AI companies: Tell clients to assume November 2026 for EU content marking. Building to the earlier date is the safer bet. If the Council wins the argument, clients gain time.
Who feels this most:
Media & Publishing: The EU AI Act Article 50 transparency obligations apply to AI-generated content intended for EU audiences. Your infrastructure question is labelling and detection, and the answer to whether you have until November or February will land April 28. Watch that date.
🇺🇸 USA | Bartz v. Anthropic: the $1.5 billion copyright settlement goes to final approval May 14
Status: 🟡 Watch - The largest copyright settlement in US history gets its fairness hearing in four weeks; court approval will set the pricing benchmark for training AI on books
What’s happening
In August 2025, Anthropic agreed to pay $1.5 billion to settle Bartz v. Anthropic, a class action brought by authors whose books were used without consent to train Claude. The settlement covers approximately 500,000 works drawn from datasets including LibGen. On April 8, 2026, Judge Martinez-Olguin rescheduled the settlement fairness hearing to May 14, 2026, when the court will decide whether to grant final approval.
Authors Alliance has published unsealed objections from authors who believe the settlement undervalues their works. If approved, the settlement becomes the largest copyright settlement in US history and the first major precedent for what compensation looks like when an AI company trains on books without consent.
Why founders should care
A $1.5 billion settlement for 500,000 works implies roughly $3,000 per work. That number will appear in every AI training license negotiation for years, regardless of what courts eventually rule on fair use. The settlement is not a legal judgment; it’s a commercial deal. But it sets the market expectation.
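As a back-of-the-envelope check on that figure (a sketch only; what individual claimants actually receive depends on claim rates, fees, and administration costs):

```python
# Back-of-the-envelope check on the implied per-work rate. Illustrative only:
# actual per-claimant payouts depend on claim rates, fees, and administration.
settlement_total = 1_500_000_000   # USD, Bartz v. Anthropic settlement fund
covered_works = 500_000            # approximate number of covered works

print(f"Implied rate: ${settlement_total / covered_works:,.0f} per work")
# Implied rate: $3,000 per work
```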
This matters alongside two other cases moving in parallel: Thomson Reuters v. Ross Intelligence (training on legal headnotes found not to be fair use, now on appeal in the Third Circuit) and Getty Images v. Stability AI (heading toward trial in the US). The legal picture on training data is converging from three directions at once, and the direction is consistent.
So what?
If you’re building AI products: If you’ve trained on copyrighted books, articles, or creative works without licenses, the Bartz settlement establishes what damages look like. The May 14 hearing won’t change your exposure, but it will crystallise the cost. Start licensing conversations now.
If you’re using AI in your business: If you use AI tools built on training data of uncertain provenance, ask your vendor whether they have copyright indemnification. The Bartz precedent means this question has a dollar figure attached.
If you’re advising AI companies: Tell clients that three copyright cases are moving simultaneously. The Bartz settlement, if approved May 14, sets one data point on pricing per work. Clients with unlicensed training data should assume they face similar exposure and begin licensing conversations before court outcomes force their hand.
Who feels this most:
Media & Publishing: Three copyright cases, three different content types (books, images, legal text), all moving toward resolution this year. If you publish written content and haven’t begun AI licensing discussions, you’re losing negotiating leverage with every week that passes.
🇬🇧 UK | Competition and Markets Authority opens formal investigation into Microsoft’s AI-bundled software
Status: 🟡 Watch - The Competition and Markets Authority announced a strategic market status investigation on March 31; it formally commences in May and has nine months to run
What’s happening
On March 31, 2026, the Competition and Markets Authority (CMA) announced a strategic market status (SMS) investigation into Microsoft’s business software ecosystem, covering Windows, Teams, Word, Excel, and Microsoft’s Copilot AI platform. The investigation formally commences in May 2026, and the CMA has nine months to complete it under the Digital Markets, Competition and Consumers Act 2024 (DMCCA).
The CMA’s core concern is that Microsoft’s bundling of Copilot with enterprise software could close the market to competing AI developers and lock enterprise customers into Microsoft’s AI products. As part of the same package, the CMA secured commitments from Microsoft and Amazon Web Services on cloud egress fees and interoperability. The CMA published separate guidance in March on agentic AI and consumer law, setting out risks of using AI agents without adequate human oversight.
Why founders should care
If you compete with Copilot or build AI tools for enterprise use on Microsoft platforms, this investigation is directly relevant to your market access. If the CMA concludes Microsoft has strategic market status, it can impose conduct requirements (interoperability mandates, bundling restrictions) within 12 months of a decision, meaning remedies could land by mid-2027.
For customers: if you feel locked into Microsoft AI because it’s bundled with the software you already pay for, the investigation creates the conditions for more choice by mid-2027. The CMA demonstrated real appetite for this kind of intervention in its earlier cloud services work.
So what?
If you’re building AI products: If you build AI tools competing with Copilot in productivity software and serve UK enterprise customers, the CMA investigation is working in your favour. Consider submitting evidence on how Microsoft’s bundling affects your ability to compete.
If you’re using AI in your business: Make procurement decisions on current terms rather than anticipated remedies. The investigation takes nine months. Don’t wait on it.
If you’re advising AI companies: Tell clients competing with Big Tech AI products in the UK that the CMA is receptive to market access concerns. The investigation is an opportunity to put evidence on the record now.
Who feels this most:
Enterprise SaaS: If you sell productivity AI tools into UK enterprises alongside Microsoft’s stack, the CMA’s findings in early 2027 will determine whether you have a viable competitive position. Keep building; don’t wait for the outcome.
🇬🇧 UK | FCA signals where AI financial chatbot enforcement is heading
Status: 🟡 Watch - The Financial Conduct Authority’s March 26 perimeter report names unregulated AI financial chatbots as a consumer protection gap; this is not enforcement yet, but it’s the clearest direction signal the FCA has given
What’s happening
On March 26, 2026, the Financial Conduct Authority (FCA) published its annual perimeter report, which identifies areas at the edge of its regulatory scope where legislative change may be needed. The report explicitly flags the growth of AI chatbots providing financial guidance without FCA authorisation, including general-purpose large language models used by consumers for investment, pension, and mortgage decisions.
The FCA’s current position is that consumers using these tools are not receiving regulated advice and have no access to Financial Ombudsman Service or Financial Services Compensation Scheme protections. The report asks the UK government to consider updating regulatory boundaries if consumer harm materialises. Separately, the FCA confirmed its second cohort of firms began live AI testing in April 2026 through its AI Live Testing programme, run in partnership with technical assessor Advai.
Why founders should care
If you build a product that helps users make financial decisions, you’re operating in territory the FCA is now actively watching. The perimeter report is not an enforcement action; it is the FCA telling Parliament where it thinks new rules may be needed. But the FCA’s Mills Review, published February 2026, set out a vision for AI in retail financial services through 2030, and the combination of the two documents signals a regulatory boundary that is moving. Founders building financial guidance AI in the UK should expect a consultation on extending the FCA’s perimeter within 12 to 18 months.
So what?
If you’re building AI products: If your product could be described as giving financial guidance, get a perimeter analysis done now. The window between a perimeter report and a formal consultation is shorter than founders typically expect, and the window between a consultation and enforcement is shorter still.
If you’re using AI in your business: If you use AI tools to help customers with financial decisions, check whether your use falls within FCA-regulated activity. If you’re unsure, get advice.
If you’re advising AI companies: Tell clients in fintech and wealthtech that the FCA has named this as a priority area for possible legislative reform. If they’re building financial guidance AI without authorisation, they should plan for a regulatory consultation within the year.
Who feels this most:
Financial Services: The FCA is watching. The Mills Review set the long-term direction; the perimeter report identifies the near-term gap. If you build AI for financial guidance, advice, or decision-support in the UK, you’re in the FCA’s field of view. Not regulated today. That is likely to change.
🇪🇺 EU | General-purpose AI enforcement powers activate in 15 weeks
Status: 🔴 Watch - From August 2, 2026, the EU AI Office can request information from general-purpose AI model providers, evaluate their models, and issue fines
What’s happening
August 2, 2026 is when the EU AI Office’s enforcement powers for general-purpose AI (GPAI) models come into force. From that date, the AI Office can request technical documentation, carry out independent model evaluations, and order model recalls. Fines for GPAI violations can reach 3 percent of global annual turnover or 15 million euros, whichever is higher. For GPAI models with systemic risk (defined as models trained using more than 10^25 floating-point operations of compute), adversarial testing is mandatory.
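For a sense of scale, that threshold can be sanity-checked with the widely used approximation that dense transformer training costs about 6 FLOPs per parameter per training token. A minimal sketch, assuming that heuristic (it is not the AI Office’s official measurement methodology):

```python
# Rough check against the AI Act's 10^25 FLOP systemic-risk threshold using
# the common ~6 * parameters * tokens training-compute heuristic. This is an
# approximation, not the AI Office's official measurement methodology.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per param per token."""
    return 6.0 * params * tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(params=70e9, tokens=15e12)
print(f"~{flops:.1e} FLOPs -> systemic risk: {flops > SYSTEMIC_RISK_FLOPS}")
# ~6.3e+24 FLOPs -> systemic risk: False
```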
The GPAI Code of Practice, published July 2025, is the compliance tool: signatories who implement its transparency, copyright, and safety chapters are treated as demonstrating compliance with the AI Act. The AI Office is currently reviewing whether signatories are actually implementing the code, ahead of enforcement going live.
Why founders should care
If you provide access to a general-purpose AI model to EU users, either directly or through an application programming interface (API), GPAI rules apply to you. For most small teams, the relevant obligations are in the transparency chapter: technical documentation, training data source disclosure, and a published copyright opt-out policy.
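As a rough picture of what that record-keeping amounts to, here is a minimal sketch; the field names are hypothetical, and the authoritative format is the documentation templates in the AI Act and the GPAI Code of Practice:

```python
# Minimal sketch of the transparency records a GPAI provider needs to keep.
# Field names are hypothetical; use the official AI Act / Code of Practice
# documentation templates as the authoritative format.
from dataclasses import dataclass

@dataclass
class GPAITransparencyRecord:
    model_name: str
    provider: str
    training_data_sources: list[str]   # high-level description of sources
    copyright_optout_policy_url: str   # published copyright opt-out policy
    technical_documentation_url: str
    systemic_risk: bool = False        # e.g. >1e25 FLOPs of training compute

record = GPAITransparencyRecord(
    model_name="example-model-v1",     # hypothetical model and provider
    provider="Example AI Ltd",
    training_data_sources=["licensed publisher corpora", "public web crawl"],
    copyright_optout_policy_url="https://example.com/copyright-optout",
    technical_documentation_url="https://example.com/model-docs",
)
```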
If you haven’t signed the GPAI Code of Practice, you’re relying on demonstrating compliance through alternative means. With enforcement 15 weeks away, that’s a practical question worth resolving now.
So what?
If you’re building AI products: If you provide GPAI models to EU users, check whether you’ve satisfied GPAI transparency obligations. Training data documentation and copyright opt-out policy publication are the minimum. Sign the GPAI Code of Practice if you haven’t.
If you’re using AI in your business: If you use API-based AI models to build products served to EU customers, ask your AI provider whether they are GPAI-compliant. Their compliance status affects your downstream risk.
If you’re advising AI companies: Tell clients providing GPAI models that August 2 is firm. Fifteen weeks is enough time to prepare documentation but not enough time to build from scratch. If clients haven’t started, this conversation needs to happen today.
Who feels this most:
AI infrastructure and API providers: If you offer model access via API and serve EU customers, you are a GPAI provider. Transparency documentation, training data disclosure, and a copyright opt-out policy are mandatory from August 2. The GPAI Code of Practice is the clearest path to demonstrating compliance.
🇪🇺 EU | AI content labelling code: second draft published, final rules 15 weeks away
Status: 🟡 Watch - The second draft of the EU’s Code of Practice on Marking and Labelling of AI-generated content is out; the final version is expected by June, with rules applying August 2
What’s happening
The European Commission published the second draft of the Code of Practice on Marking and Labelling of AI-generated content, incorporating feedback from hundreds of participants including industry, civil society, and researchers. Compared to the first draft, the second is more flexible and practice-oriented. It removes the distinction between AI-generated and AI-assisted content (which caused significant compliance uncertainty in the first draft) and introduces design and placement requirements for labels, icons, and disclaimers to ensure a minimum level of uniformity across platforms.
A task force will develop a uniform, EU-wide interactive icon. The code is expected to be finalised by June 2026, and the Article 50 transparency obligations it supports become applicable on August 2, 2026.
Why founders should care
If you generate images, audio, video, or text using AI and serve EU audiences, the labelling obligations apply to you. The removal of the AI-generated versus AI-assisted distinction simplifies compliance: you need to label outputs that are substantially AI-generated, without trying to categorise degrees of AI involvement. A June finalisation leaves at most about eight weeks between knowing the exact requirements and needing to comply. For any team without labelling infrastructure, that’s a short window.
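A minimal sketch of the visible-label half of that infrastructure, assuming Pillow is available. The exact wording, icon, and placement must follow the final code’s design and placement requirements, and machine-readable marking (metadata or watermarking) is a separate obligation on top of this:

```python
# Minimal sketch: stamp a visible "AI-generated" disclosure onto an image.
# Assumes Pillow is installed. Label text, size, and placement should follow
# the final code of practice's design and placement requirements.
from PIL import Image, ImageDraw

def label_ai_image(path_in: str, path_out: str, text: str = "AI-generated") -> None:
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 30                      # bottom-left corner
    box = draw.textbbox((x, y), text)
    draw.rectangle((box[0] - 6, box[1] - 4, box[2] + 6, box[3] + 4),
                   fill=(0, 0, 0))                  # dark backing for legibility
    draw.text((x, y), text, fill=(255, 255, 255))
    img.save(path_out)

label_ai_image("output.png", "output_labelled.png")  # hypothetical filenames
```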
So what?
If you’re building AI products: If you generate AI content for EU users, read the second draft now. The final version will be similar enough that you can begin building to it. Labelling infrastructure and detection tools need to be ready by August 2.
If you’re using AI in your business: If you publish AI-generated content for EU audiences, the same labelling rules apply. Work out your labelling process before June; you’ll have limited time between the final code and the August 2 application date.
If you’re advising AI companies: Tell clients publishing AI-generated content in the EU that June-to-August is an extremely short implementation window. Start compliance work on the second draft now.
Who feels this most:
Media & Publishing: You will be labelling AI-generated content from August 2. The second draft is available now. Read it. Build your labelling process before the final version lands in June.
Continuing stories
No material movement since last Thursday on the following. Brief notes only.
🇬🇧 UK | ICO automated hiring consultation: 43 days remain
Still no movement since last Thursday. The Information Commissioner’s Office (ICO) automated hiring consultation closes May 29. If you build or use AI for recruitment in the UK, respond now to shape the final guidance, which lands in Q3 2026.
🇬🇧 UK | Children’s safety consultation: 40 days remain
Still no movement since last Thursday. The Department for Science, Innovation and Technology (DSIT) consultation on children’s online safety, including AI chatbot obligations, closes May 26. If your product could be accessed by under-16s in the UK, respond.
🇺🇸 USA | Colorado AI Act deadline: 47 days away
Still no movement since last Thursday. The Colorado AI Act impact assessment requirement for high-risk systems is June 2, 2026. If you build or use AI for hiring, credit, or consequential decisions in Colorado, you’re 47 days from the deadline. Documented bias testing and human oversight procedures are the minimum.
🇬🇧 UK | Copyright: still frozen
The UK government continues its “wait and see” position on AI and copyright, as covered in previous briefs. No new movement. Not actionable now.
Probably noise
Developments unlikely to affect founders in the next 12 months, and why
🇺🇸 USA | Federal AI legislation: The 119th Congress has introduced multiple AI bills, including the AI Foundation Model Transparency Act (which would direct the Federal Trade Commission to establish training data disclosure requirements), the Protect American AI Act, and the GUARDRAILS Act (which would nullify the White House’s December 2025 AI preemption order). None have advanced out of committee. Federal AI legislation is stalled. State laws are the active compliance risk.
🇪🇺 EU | Apply AI strategy: The European Commission published a 1 billion euro initiative to accelerate AI adoption across ten strategic sectors, including healthcare, energy, and manufacturing. This is industrial policy, not regulation. It creates no compliance obligations for small teams. Worth tracking if you’re seeking EU public procurement contracts or research grants.
🇪🇺 EU | AI Office energy consumption consultation (closes May 15): A targeted consultation on measuring energy consumption and emissions of AI models. If you’re a small team using third-party models, this doesn’t affect you directly. If you train or run large models, this could eventually become mandatory energy reporting. Submit a response if you have a view on methodology.
🇺🇸 USA | Supreme Court AI authorship denial: The Supreme Court declined to hear Thaler v. Perlmutter in March 2026, confirming that AI-generated works without human authorship are not copyrightable under US law. This was consistent with settled circuit court precedent. No change to your IP approach is needed as a result.
The pattern this week
Two things are moving at once, and they’re reinforcing each other.
August 2, 2026 is crystallising as a genuine inflection point in the EU. The GPAI enforcement powers activate that day. Article 50 transparency obligations apply that day. The AI content labelling code becomes effective that day. And if the Digital Omnibus political agreement lands April 28 (as expected), the synthetic content marking deadline will be locked in at either November 2, 2026 or February 2, 2027.
For anyone building AI products or publishing AI-generated content for EU users, August 2 is now 15 weeks away. It’s not abstract. The question is whether your documentation and labelling infrastructure is ready.
Copyright is being settled commercially before the courts settle it legally. The Bartz v. Anthropic $1.5 billion settlement, heading for final court approval May 14, establishes a market rate for training on books without consent: roughly $3,000 per work for this dataset. That number will appear in AI training license negotiations for years, regardless of what courts eventually say about fair use.
The Thomson Reuters v. Ross Intelligence ruling (training on legal headnotes is not fair use) is on appeal. The Getty Images case is heading to trial. Multiple cases, multiple content types, converging on the same conclusion.
The UK regulatory posture this week is interesting. The CMA launched a major investigation into Microsoft’s AI-bundled software. The FCA named unregulated financial AI chatbots as a consumer protection gap. Neither is a law. Both reflect a UK that is using existing competition and consumer protection powers aggressively on AI market structure and consumer risk, rather than waiting for AI-specific legislation.
This approach is faster and less predictable than the EU’s rule-making process, and for founders, it means enforcement exposure can appear through existing law before new rules are written. Build your compliance posture on existing consumer and competition law, not just on future AI-specific rules.
The US picture is quieter this week on new regulatory action, but the underlying dynamic remains: state laws are advancing on their own timelines (Colorado June 2, California August 2), federal preemption is uncertain, and the DOJ AI Litigation Task Force is active but hasn’t filed its first case yet. The federal enforcement story is still in the setup phase.
Sector getting the most heat this week
Sector: Financial services and fintech
If you’re building AI for financial services in the UK or EU, you’re in the field of view of the FCA, the CMA, and the EU AI Office simultaneously, and they’re all pointing at the same problem from different directions.
The FCA named unregulated AI financial chatbots in its perimeter report. That’s a warning, not a rule. But the FCA’s Mills Review set out a vision through 2030 for how it expects to regulate AI in retail financial services, and the gap between the perimeter report and formal consultation typically runs 12 to 18 months. If your product helps users make investment, pension, mortgage, or tax decisions, get a perimeter analysis done now. Not when the consultation launches.
The CMA opened its Microsoft investigation partly because it worries about AI being bundled into enterprise software in ways that close markets to competing providers. If you sell AI tools into UK enterprises alongside Microsoft’s stack, the CMA is creating the conditions for your market to remain open. The investigation takes nine months; watch for interim findings.
In the EU, the August 2 deadline for GPAI enforcement applies if you run or embed AI models in financial products served to EU customers. Transparency documentation, training data disclosure, and a copyright opt-out policy are mandatory from that date. Check whether your AI vendor is GPAI-compliant. If they’re not, their compliance gap becomes your downstream problem.
And if you’re in the EU and offer automated lending, credit scoring, or insurance pricing decisions, you’re a high-risk AI provider under Annex III of the EU AI Act. Your compliance deadline is December 2, 2027. That’s roughly 20 months away. The EU AI Office will be in full enforcement mode by August and looking to establish precedent. Don’t be the example.
Dates to put in your calendar
28 April 2026 | EU | Digital Omnibus trilogue meeting. Political agreement expected on AI Act amendments. The outcome locks in the synthetic content marking deadline (November 2, 2026 or February 2, 2027) and confirms December 2027 as the high-risk AI compliance date.
14 May 2026 | USA | Bartz v. Anthropic settlement fairness hearing. Judge decides whether to grant final approval to the $1.5 billion copyright settlement. Approval sets the benchmark for AI training on copyrighted books.
15 May 2026 | EU | EU AI Office consultation on energy consumption measurement closes. Relevant to large model providers; not urgent for most small teams.
26 May 2026 | UK | Children’s online safety consultation closes (DSIT). Affects AI chatbots and generative AI products accessible to under-16s in the UK.
29 May 2026 | UK | ICO automated hiring consultation closes. Final guidance lands Q3 2026. Respond now if you build or use AI hiring tools in the UK.
2 June 2026 | USA | Colorado AI Act impact assessment deadline for high-risk systems. If you use or build AI for hiring, lending, or consequential decisions in Colorado, this is your deadline.
June 2026 | EU | EU AI Act content labelling code of practice expected to be finalised. Rules apply August 2, 2026.
2 August 2026 | EU & USA | GPAI enforcement powers activate. Article 50 transparency and content marking obligations also apply from this date. California’s SB 942 watermarking rules take effect the same day.
2 December 2027 | EU | Compliance deadline for Annex III high-risk AI systems (hiring, credit, biometrics, consequential decisions). Locked in via the Digital Omnibus.
2 August 2028 | EU | Compliance deadline for Annex I systems (AI embedded in regulated products).
