What's Building | Thursday 9 April 2026
For founders, operators, and creators who are using AI | USA · UK · EU
🇺🇸 USA | Copyright infringement case against AI image generators gets clearer in court
Status: 🟡 Watch — A judge is letting Getty Images amend its lawsuit against Stability AI, signaling that copyright infringement claims against AI companies may be viable
What is happening
On April 8, 2026, a federal judge in San Francisco indicated that Getty Images can strengthen its copyright lawsuit against Stability AI. During a motion hearing, the judge suggested Getty could “beef up” its core infringement claims, though she pressed the company on whether it had adequately demonstrated Stability’s intent to facilitate copyright infringement. The underlying allegation is that Stability scraped more than 12 million Getty photographs without permission to train its image generator.
This follows the UK High Court’s November 2025 ruling in the same case, which largely rejected Getty’s claims on copyright infringement but preserved findings of fact that Getty is now using to bolster the US lawsuit.
Why founders should care
The Getty case is the first major test of how copyright law applies to AI model training. The US judge’s comments suggest the infringement claims will survive early dismissal, which makes a full trial likely. If Getty wins, it establishes liability exposure for any AI company that trained on copyrighted content without licenses or explicit consent.
The UK and US courts are still working through the law, but the direction is becoming clearer: scraping copyrighted works to train models is legally risky. If you have built models on copyright material, assume this exposure will be clarified within 12 months.
So what?
If you’re building AI products: If you’ve trained generative models on copyright material without licenses, you face legal risk. The Getty case will clarify the law by late 2026 or 2027. Consider negotiating content licensing agreements with copyright holders now, or be prepared for injunctions or damages.
If you’re using AI in your business: If you use generative AI trained on copyright material, you inherit some of this risk. This is less urgent than direct model building, but licensing disputes between model creators and copyright holders could affect your AI vendor’s stability. Ask your AI provider about their data sourcing and licensing strategy.
If you’re advising AI companies: Tell clients with copyright exposure that the Getty case is moving toward trial. Copyright licensing for model training isn’t yet settled, but founders should assume they may need to pay for rights or negotiate consent agreements with major copyright holders. This is a material risk to valuation and product viability.
Who feels this most
Media & Publishing: If you train AI on published works, you’re directly at risk. The Getty case will likely force a choice between licensing agreements or limited model scope. If you license models from third parties, ensure your vendor has proper copyright clearances for the training data.
🇺🇸 USA | FTC publishes its 5-year enforcement roadmap, putting AI and Big Tech front and center
Status: 🟡 Watch — The FTC’s new strategic plan confirms AI enforcement is a top priority, with focus on deceptive claims, data practices, and children’s safety
What is happening
On April 3, 2026, the Federal Trade Commission published its FY 2026-2030 Strategic Plan, setting enforcement priorities for the next five years. The plan explicitly prioritizes AI-related enforcement, with named focus areas including deceptive AI practices (AI-washing), data privacy violations tied to AI, and unfair algorithmic decision-making. The plan also signals the FTC will deploy AI and machine learning tools internally to improve its own enforcement capabilities.
This follows the agency’s March 30 settlement with OkCupid and Match Group, in which the companies agreed to a consent decree after sharing 3 million users’ photos and location data with an AI firm for training without user consent or disclosure.
Why founders should care
The FTC’s strategic plan commits the agency to the enforcement pace we have already seen. The OkCupid settlement is the template: if you collect user data and share it for AI training without explicit user notice, the FTC will come after you. The agency is also signaling it will use existing consumer protection law (not new legislation) to police AI practices.
This means the boundaries of what’s “deceptive” or “unfair” will be set by enforcement cases, not by clear rules. Expect FTC action against companies making unsubstantiated AI safety claims, misrepresenting data sourcing, and failing to disclose how user data is used for model training.
So what?
If you’re building AI products: Audit your data sourcing practices now. If you collect user data and use it for AI training, your privacy policy must disclose this explicitly. Don’t make claims about your AI system’s safety, fairness, or capabilities unless you can document them. If you can’t prove a claim, cut it; the FTC will challenge it. A minimal sketch of what a data-sourcing inventory could look like follows this list.
If you’re using AI in your business: If you use third-party AI vendors to process customer data, ensure your contracts require them to disclose data sourcing and consent practices. The FTC is holding both data collectors and data users accountable. Don’t assume your vendor’s terms are compliant.
If you’re advising AI companies: Tell clients to review their data sourcing practices and privacy disclosures immediately. The OkCupid settlement shows the FTC will pursue companies that share data for AI training without disclosure, even if the violation happened years ago. Clients should assume the FTC will scrutinize any claim they make about fairness, safety, or AI performance.
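For the audit itself, a simple machine-readable inventory beats scattered notes. Below is a minimal sketch, assuming a Python-based workflow; the field names (consent_basis, disclosed_in_privacy_policy, and so on) are our own illustration, not an FTC schema, and this is not legal advice.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """One entry in a training-data inventory (illustrative fields, not an official schema)."""
    name: str                       # internal dataset name
    source: str                     # where the data came from (vendor, scrape, user uploads)
    contains_user_data: bool        # does it include data collected from your own users?
    consent_basis: str              # e.g. "explicit opt-in", "terms of service", "licensed", "unknown"
    disclosed_in_privacy_policy: bool
    license_or_agreement: str | None = None
    last_reviewed: date = field(default_factory=date.today)

def flag_risky(records: list[DatasetRecord]) -> list[DatasetRecord]:
    """Surface the OkCupid pattern: user data used for training without
    explicit disclosure or a clear consent basis."""
    return [
        r for r in records
        if r.contains_user_data
        and (not r.disclosed_in_privacy_policy or r.consent_basis in ("unknown", "terms of service"))
    ]

# Example usage with hypothetical entries
inventory = [
    DatasetRecord("profile_photos_2023", "user uploads", True, "unknown", False),
    DatasetRecord("licensed_stock_corpus", "vendor", False, "licensed", True, "Vendor MSA 2025"),
]
for r in flag_risky(inventory):
    print(f"Review before training: {r.name} ({r.source})")
```

The point of the flag_risky check is simply to surface the risky combination early: user data flowing into training without explicit disclosure or a documented consent basis.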
Who feels this most
Media & Publishing: If you use user-generated content or user data to train AI systems, disclose it clearly in your privacy policy and terms of service. The OkCupid precedent is direct: undisclosed data sharing for AI training is an enforceable violation.
AdTech & Retail: If you use behavioral data to train recommendation or targeting algorithms, ensure consent is explicit and documented. The FTC is watching algorithmic discrimination and deceptive targeting practices.
🇺🇸 USA | Federal preemption of state AI laws remains uncertain, as states press ahead with enforcement
Status: 🔴 Watch — Still developing since our last brief. The preemption fight shows no resolution, and state laws are moving forward on their own timelines
What is happening
As we covered last week, the Trump administration’s push for federal preemption of state AI laws through litigation remains in the pipeline, but states are not waiting for the outcome. Colorado’s implementation deadline moved to June 2026. New York City’s bias audit requirement is in enforcement mode. California finalized new AI transparency requirements with an August 2, 2026 effective date.
The administration’s Commerce Department is preparing to challenge state laws in court on Dormant Commerce Clause grounds, but Congress has already rejected this framing. Colorado proposed amendments to clarify developer and deployer liability under its AI Act, signaling that states are actively refining their frameworks rather than pausing for federal litigation.
Why founders should care
The preemption fight isn’t freezing state laws in place. State deadlines are moving forward regardless of the federal challenge. If you operate across state lines, assume all applicable state laws are binding through 2026 and beyond. Colorado’s June deadline is now less than 60 days away. California’s August 2 effective date for watermarking and transparency (under SB 942) is four months away. Multi-state operators cannot wait for federal resolution. Build for the hardest state now.
So what?
If you’re building AI products: If you operate in more than one US state, audit which states you serve and what each state requires. Build for the state with the hardest requirements first. Colorado’s June deadline is immediate. California’s August 2 deadline is coming. Don’t assume federal preemption will save you.
If you’re using AI in your business: If you use AI in hiring, lending, or consequential decisions across state lines, you’re subject to all applicable state laws in states where your decisions affect residents. Map your exposure by state and compliance deadline. Start with Colorado and California.
If you’re advising AI companies: Tell clients that federal preemption is uncertain but state liability is certain. Help them build a state-by-state compliance map with specific deadlines and requirements. If they don’t have a Colorado roadmap by May 1, they’ll miss the June deadline.
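A state-by-state compliance map can start as something this simple, reviewed weekly as deadlines approach. The sketch below is illustrative only: the deadlines and requirement summaries are taken from this brief, and you should verify each against the statute before relying on them.

```python
from datetime import date

# Illustrative map only; deadlines as summarized in this brief — verify each against the statute.
# deadline=None means the requirement is already in enforcement.
STATE_AI_RULES = {
    "CO":  {"law": "Colorado AI Act",        "deadline": date(2026, 6, 2),
            "requires": ["impact assessments for high-risk systems", "bias testing", "documented reasonable care"]},
    "NYC": {"law": "Bias audit requirement",  "deadline": None,
            "requires": ["annual bias audit", "candidate notice"]},
    "CA":  {"law": "SB 942 transparency",     "deadline": date(2026, 8, 2),
            "requires": ["AI output watermarking", "free detection tool"]},
}

def by_urgency(today: date):
    """Yield (jurisdiction, law, days_remaining), nearest obligation first; -1 means already in force."""
    def days_left(rule):
        return -1 if rule["deadline"] is None else (rule["deadline"] - today).days
    for juris, rule in sorted(STATE_AI_RULES.items(), key=lambda kv: days_left(kv[1])):
        yield juris, rule["law"], days_left(rule)

for juris, law, days in by_urgency(date(2026, 4, 9)):
    status = "already in force" if days < 0 else f"{days} days away"
    print(f"{juris:<4} {law:<24} {status}")
```

Sorting by days remaining keeps the hardest, nearest jurisdiction at the top of the list, which is where the compliance work should start.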
Who feels this most
HR / Hiring: New York City’s bias audit requirement is live. Colorado’s deadline is June. California’s SB 942 watermarking rule is August. If you use AI for hiring across these states, you need compliance by the nearest deadline. Start with NYC and Colorado.
Financial Services: If you use AI for lending or credit decisions across state lines, assume state AI laws apply. Colorado’s June deadline requires impact assessments for high-risk systems. New York and California have parallel requirements. Map your exposure by June 1.
🇬🇧 UK | Copyright and AI policy remains frozen, but the door is cracking open
Status: 🟡 Watch — Still developing since our last brief. The UK government still has no preferred policy on copyright and AI, but courts on both sides of the Atlantic are starting to clarify the law
What is happening
As we covered last week, the UK government abandoned its opt-out proposal for copyright and AI in March 2026 and is now taking a “wait and see” approach. The UK High Court’s November 2025 ruling largely rejected Getty’s copyright infringement claims, but the story is moving again on the US side: the federal judge’s April 8 comments suggest copyright claims against AI companies may survive initial dismissal, which could influence UK policy in the coming months. The government is monitoring litigation on both sides of the Atlantic.
Why founders should care
The UK copyright question is still unresolved, but courts are now providing answers. If the US Getty case succeeds, it will pressure the UK government to clarify its position. For now, you have a grace period on copyright, but it isn’t a long one. By late 2026 or early 2027, the government is likely to revisit this question based on court outcomes. The EU’s approach to copyright protection is stricter (as reflected in the Digital Omnibus trilogue), and the UK may follow suit if courts signal that copyright protection is necessary.
So what?
If you’re building AI products: If you train generative models on copyright material and serve UK or EU customers, you’re in a legal gray zone that’s narrowing. The Getty case will clarify the risk by late 2026. Consider negotiating content licensing agreements now, before court decisions force your hand.
If you’re using AI in your business: If you use generative AI trained on copyright material (most large models do), track the Getty case and any UK policy updates. No action needed now, but this could change if courts rule against AI companies.
If you’re advising AI companies: Tell clients with UK copyright exposure that courts are starting to clarify the law. The UK government’s pause isn’t a permanent solution. If the Getty case succeeds in the US, expect UK policy to shift. Budget for licensing or model retraining by late 2026.
Who feels this most
Media & Publishing: If you train AI on published works and serve UK customers, the Getty case outcome will affect your legal risk. The UK is less restrictive than the EU right now, but that advantage may not last. Start licensing content for model training or prepare to narrow your training data.
🇬🇧 UK | ICO consultation on automated hiring decisions still open until 29 May
Status: 🟡 Watch — Still developing since our last brief. The ICO’s enforcement tone is stricter than previous guidance, and the consultation window is closing
What is happening
As we covered last week, the Information Commissioner’s Office published a report on March 31, 2026, based on evidence from over 30 employers. The report finds that most employers believe they are using AI for decision support, but in practice are making fully automated decisions with no meaningful human involvement. The ICO is consulting on draft guidance until 29 May 2026, with a focus on bias testing, transparency with candidates, and recourse rights. No new updates this week, but the consultation window closes in about 50 days.
Why founders should care
This is your chance to shape UK hiring AI enforcement. The consultation closes 29 May. If you build hiring tools or use AI for hiring in the UK, respond now if you want to influence the final guidance. The ICO’s tone is notably strict, and the final guidance will become your compliance floor. The report’s central finding is the one to act on: meaningful human involvement must be genuine and active, not a rubber stamp. If your hiring system has human review, document it and make it active.
So what?
If you’re building AI products: If you sell hiring tools to the UK, use the consultation window (closes 29 May) to respond and shape the final rules. The final guidance will land in Q3 2026 and will become a compliance requirement. You’ll need bias testing, transparency, and genuine human review. Start building these features now if you don’t have them.
If you’re using AI in your business: If you use AI to screen candidates in the UK, audit your system now. Do you have evidence of bias testing? Can candidates see how you screened them? Can they request meaningful human review? If the answer to any is no, fix it by May 2026.
If you’re advising AI companies: Tell UK clients that the ICO consultation (closes 29 May) is your window to shape the final rules. If your clients build hiring tools, submit feedback. The final guidance lands in Q3 2026 and will be enforceable.
Who feels this most
HR / Hiring: This is directly about you. The ICO expects meaningful human involvement, regular bias testing, and transparent communication with candidates. Document your bias testing and human review process now. The consultation closes 29 May.
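On documenting bias testing: one common starting point is a selection-rate comparison across groups, sketched below. This is a simple screen (the 0.8 threshold is the familiar “four-fifths” rule from US practice), not the ICO’s prescribed method; the final guidance will set the actual expectations.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (candidates screened in, total candidates considered)."""
    return {group: passed / total for group, (passed, total) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
    """Compare each group's selection rate to the best-performing group's rate.
    A ratio under the threshold (0.8 is the common 'four-fifths' screen) warrants investigation."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best, rate / best < threshold) for group, rate in rates.items()}

# Example: one quarter's automated CV sift, by group (hypothetical numbers)
quarterly = {"group_a": (120, 400), "group_b": (60, 300), "group_c": (90, 310)}
for group, (ratio, flagged) in adverse_impact_ratios(quarterly).items():
    print(f"{group}: impact ratio {ratio:.2f}" + ("  <-- review" if flagged else ""))
```

Run something like this on each stage the AI touches and keep the outputs; that running record is the documented evidence of bias testing the ICO expects to see.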
🇬🇧 UK | Children’s online safety consultation open until 26 May on AI obligations
Status: 🟡 Watch — Still developing since our last brief. The consultation window is closing, and if your product serves children, this is your chance to influence the rules
What is happening
As we covered last week, the UK Department for Science, Innovation and Technology launched a consultation on March 2, 2026 called “Growing up in the online world: a national conversation.” The consultation includes explicit obligations for AI chatbots and generative AI services, with proposed measures including age assurance mechanisms, a potential statutory minimum age for social media, and new safeguards for AI products that children can access. The consultation closes on 26 May 2026. No new updates this week, but the response window is now about 47 days away from closing.
Why founders should care
If you build chat products, generative AI tools, or any service that might be accessible to children, this consultation is directly about you. The consultation is still broad enough to influence, and 47 days remain to submit feedback. The UK is moving toward age-gating and content safety obligations for AI products serving children. The final rules will likely land in late 2026 or early 2027, with implementation by 2027 at the earliest. For now, respond to the consultation if you want to shape the outcome.
So what?
If you’re building AI products: If your AI product is accessible to anyone under 16 in the UK, the consultation (closes 26 May) is your window to respond. Final guidance will likely require age verification and age-appropriate safeguards. Think now about how to implement these features. The implementation deadline is likely 2027 at the earliest.
If you’re using AI in your business: If you use AI chatbots or generative AI services in the UK, track this consultation. Final rules may require age verification or content filtering. For most small teams, this is a low priority right now. Watch for final guidance in Q3 2026.
If you’re advising AI companies: Tell UK clients that child safety regulation for AI is coming. The consultation (closes 26 May) is your window to shape it. If your product could serve children, respond now. Final rules land in late 2026 or early 2027.
Who feels this most
EdTech: If you build educational AI products used by students under 16 in the UK, this consultation is directly about you. Age verification and content safety will be mandatory. Start thinking now about how to implement age checks and protect minors from inappropriate outputs.
🇪🇺 EU | Digital Omnibus trilogue enters final stage, with next meeting set for 28 April
Status: 🔴 Watch — Still developing since our last brief. Trilogue is moving fast, and a political agreement is expected by late April or May
What is happening
As we covered last week, the European Parliament confirmed its position on March 26, 2026, and trilogue negotiations began the same day. The next trilogue session is scheduled for 28 April 2026, just 19 days from now. Political negotiations are already intense, with Parliament and Council closely aligned on several major issues, including fixed compliance deadlines: Annex III systems (hiring, credit, biometrics) by 2 December 2027, and Annex I systems (foundational models) by 2 August 2028. The Cypriot Presidency is targeting a political agreement by April or May 2026, well before the AI Act’s August 2, 2026 general application date.
Why founders should care
The AI Act compliance timeline is now locked in. There is no extension coming. If you sell high-risk AI systems to the EU (hiring, credit, insurance, biometrics, or consequential decision-making), your deadline is 2 December 2027, about 20 months away. The trilogue is moving toward agreement, and the final rules will be published shortly after. For EU-focused founders, compliance work should start now, not after the trilogue concludes. EU member state sandboxes open August 2, 2026, which could provide liability protection if you qualify and participate. Apply for sandbox access in Q3 2026 if your system qualifies.
So what?
If you’re building AI products: If you sell to the EU and your product makes decisions about hiring, credit, or other high-risk use cases, your hard deadline is 2 December 2027. Start conformity assessment and documentation work immediately. Member state sandboxes open August 2, 2026. Check whether your member state has announced its sandbox (most will by July). If your system qualifies, applying for sandbox participation in Q3 2026 gives you liability protection during compliance.
If you’re using AI in your business: If you use high-risk AI in the EU, assume you’re subject to the AI Act as a user. Track whether your AI vendor has a compliance plan for December 2027. Ask them directly about their timeline.
If you’re advising AI companies: Tell clients with EU exposure that December 2027 is locked in across EU institutions. No extension. If they sell high-risk AI to the EU, they have about 20 months and should start conformity assessment immediately. Sandbox participation (launching August 2) is optional but offers liability protection. Help clients understand which member state’s sandbox is relevant to their use case and when to apply.
Who feels this most
HR / Hiring: Your EU deadline is 2 December 2027. Hiring tools are explicitly high-risk. Start documentation, fairness testing, and impact assessment immediately. Member state sandboxes launch August 2, 2026. Apply for sandbox participation by Q4 2026 if your system qualifies.
Financial Services: Credit scoring and loan approval are high-risk under the AI Act. Same 2 December 2027 deadline. If you serve the EU, start compliance work now.
🇺🇸 USA | Colorado AI Act liability framework clarified, but developer-deployer disputes remain contested
Status: 🟡 Watch — Proposed amendments to Colorado’s AI Act are clarifying who’s liable when high-risk systems discriminate
What is happening
In April 2026, Colorado proposed amendments to clarify developer and deployer liability under its AI Act. The amendments state that both developers and deployers may be held liable for violations of anti-discrimination law arising from a covered AI tool, with caveats: developers are only liable when the deployer uses the tool as intended, deployers cannot shift all liability to developers through indemnification agreements, and the use of an AI tool isn’t a defense under anti-discrimination law. This clarification is important because it pushes responsibility onto both parties to ensure compliance.
Why founders should care
If you develop or deploy high-risk AI systems in Colorado, assume you share liability for discrimination. You cannot shift all responsibility to your vendor, and your vendor cannot shift all responsibility to you. Both parties must use reasonable care. For developers, this means documenting intended use cases and providing compliance guidance to deployers. For deployers, this means auditing the systems you use and ensuring they’re deployed as intended. The June 2026 deadline is now critical. If you’re not compliant by then, you face shared liability for any discrimination the system causes.
So what?
If you’re building AI products: If you sell high-risk AI systems in Colorado (hiring, credit, insurance, etc.), document the intended use cases and provide clear guidance to deployers on how to use your system compliantly. You’re liable if the deployer uses it as intended and it discriminates, but not if the deployer misuses it. Document everything. The June deadline is 54 days away.
If you’re using AI in your business: If you use high-risk AI systems in Colorado, you share liability for discrimination with your vendor. Audit your systems and document how they’re deployed. Ensure you’re using them as the vendor intended. The June deadline is 54 days away.
If you’re advising AI companies: Tell clients with Colorado exposure that liability is shared between developers and deployers. Developers can’t shed responsibility through terms of service, and deployers can’t ignore how systems are used. Both parties must ensure compliance by June 2026. Encourage developers and deployers to work together on documentation and testing now.
Who feels this most
HR / Hiring: Colorado’s June deadline for impact assessments and bias testing is immediate. If you build or use AI hiring tools in Colorado, ensure documented evidence of bias testing and compliance with reasonable care standards. The shared liability framework means both vendor and user are on the hook.
Probably noise
Developments unlikely to affect founders in the next 12 months, and why
🇪🇺 EU | Member states reporting on AI sandbox implementations: Several EU member states (Germany, France, Spain, Denmark) are finalizing their AI regulatory sandbox structures ahead of the August 2, 2026 deadline. Sandbox participation is optional and only relevant if you operate in a specific EU country and want liability protection. Most announcements will be complete by July. Not actionable until you know which member state is relevant to your use case.
🇬🇧 UK | AI Growth Lab consultation results: The UK DSIT opened consultation on AI Growth Lab (regulatory sandboxes) in October 2025. No new updates this week. This remains a policy proposal with no confirmed timeline. Interesting but not actionable yet. Revisit if the government announces a launch date.
The pattern this week
The regulatory floor continues to rise across all three jurisdictions, and this week shows acceleration in three distinct directions.
First, courts are starting to settle questions that policy has left open. The Getty Images case is moving toward trial, and the judge’s April 8 comments suggest copyright claims against AI companies will be tested on the merits. This is forcing policy hands on both sides of the Atlantic. The UK government’s “wait and see” approach is now waiting on actual court rulings. If Getty wins, expect rapid policy shifts in the UK and possibly stronger copyright protections in the Digital Omnibus. Founders cannot wait for courts to decide. Copyright risk is real and growing.
Second, the FTC’s April 3 strategic plan confirms what we already saw in enforcement (the OkCupid settlement, just before this reporting window). The agency is using existing consumer protection law to police AI practices, not waiting for new legislation. The OkCupid settlement is the template for data sharing disputes: undisclosed use of user data for AI training is an enforceable violation, and the FTC will pursue it retroactively. This is the enforcement pattern going forward.
Third, the consultation windows in the UK (hiring AI by May 29, children’s safety by May 26) are closing. These are real opportunities to shape enforcement. If you operate in the UK, respond to the ICO and DSIT consultations in the next six weeks. The final guidance will land in Q3 2026 and will be enforceable.
Colorado’s June deadline is the most immediate state-level deadline in the US, just under eight weeks out. If you operate in Colorado and haven’t started impact assessments and bias testing, you’re behind schedule. The shared liability framework (developers and deployers both liable) means vendors and customers need to work together now.
Sector getting the most heat this week
Sector: Media & Publishing
You are now caught between two pressing legal fronts: copyright and content labeling. The Getty case is moving toward trial, and the judge’s April 8 comments suggest copyright claims against AI companies are viable. If you train AI on published works, you face copyright infringement risk that is becoming clearer, not smaller. Simultaneously, California’s SB 942 watermarking requirement is effective August 2, 2026, and the UK and EU are moving toward mandatory labeling of AI-generated content. The UK consultation on children’s safety (closes 26 May) explicitly addresses AI-generated content. The EU’s Digital Omnibus trilogue is moving toward agreement on AI-generated content marking timelines.
What this means: copyright licensing for model training is no longer optional. The Getty case will clarify whether you need paid licensing or can operate in the gray zone. Assume you need licensing by late 2026 or early 2027. Simultaneously, any generative AI output you create is moving toward mandatory labeling in California (August), the UK (late 2026), and the EU (as part of the trilogue). If you publish AI-generated content, you will need watermarking and labeling infrastructure. Start with California’s August 2 deadline. If you generate images or videos using AI, California SB 942 will require you to embed watermarks automatically and offer free detection tools.
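On the watermarking side, much of the work is likely to be provenance metadata rather than visible marks. The sketch below is a purely illustrative starting point, assuming PNG output and the Pillow library: it embeds a machine-readable AI-generation disclosure in the image’s metadata and reads it back. It is not an SB 942 compliance implementation, and the key names are our own, not a statutory or C2PA schema.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(img: Image.Image, path: str, provider: str, model: str) -> None:
    """Save a PNG with an embedded AI-generation disclosure in its text chunks.
    Illustrative only; the key names are our own, not an SB 942 or C2PA schema."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_provider", provider)
    meta.add_text("ai_model", model)
    img.save(path, pnginfo=meta)

def read_disclosure(path: str) -> dict:
    """Return any AI-disclosure text chunks found in the PNG (a toy 'detection' check)."""
    with Image.open(path) as img:
        return {k: v for k, v in (img.text or {}).items() if k.startswith("ai_")}

# Example usage with a placeholder image
canvas = Image.new("RGB", (512, 512), "white")
save_with_disclosure(canvas, "generated.png", provider="ExampleCo", model="example-image-v1")
print(read_disclosure("generated.png"))
```

In practice you would attach something like this (or a C2PA-style credential) automatically at generation time; the read-back function gestures at the "free detection tool" side of the requirement.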
Dates to put in your calendar
28 April 2026 | EU | Digital Omnibus trilogue meeting scheduled. Watch for movement on copyright protections, content labeling timelines, and final agreement targets. A political agreement is expected by late April or May 2026.
26 May 2026 | UK | Consultation deadline for “Growing up in the online world,” including AI chatbot and generative AI obligations. Submit feedback if your product could serve children.
29 May 2026 | UK | Deadline for feedback on ICO consultation on automated recruitment decisions. If you use or sell hiring tools in the UK, respond to the consultation to shape final guidance.
2 June 2026 | USA | Colorado AI Act impact assessment requirement deadline for high-risk systems. If you operate in Colorado and use or build AI for hiring, lending, or consequential decisions, this is your deadline.
2 August 2026 | USA | California SB 942 effective date for AI watermarking and transparency requirements. Covered providers must have watermarking infrastructure and free detection tools operational by this date.
2 August 2026 | EU | Each EU member state must establish at least one AI regulatory sandbox. Sandboxes provide liability protection if you follow sandbox guidelines in good faith while testing AI systems.
August 2026 | EU | EU AI Office enforcement powers activate. The office will be able to issue fines of up to 3 percent of global annual turnover or 15 million euros.
2 December 2027 | EU | Compliance deadline for high-risk AI systems (Annex III) under the AI Act. If you sell hiring, credit, insurance, biometrics, or consequential decision-making tools to the EU, this is your deadline.
2 August 2028 | EU | Compliance deadline for foundational AI models and general-purpose AI systems (Annex I) under the AI Act. If you build general-purpose AI systems serving the EU, this is your deadline.
Sources:
Getty Images v. Stability AI: Judge Hints Getty Could Fortify Copyright Claims
FTC Takes Action Against Match and OkCupid for Data Sharing with AI Firm
AI Enforcement Accelerates as Federal Policy Stalls and States Step In
AI Regulatory Sandboxes: State of Play and Implementation Challenges
