<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The AI Governance Playbook]]></title><description><![CDATA[What you need to know about AI governance, before it becomes your problem. For founders, operators, and creators who are using AI.]]></description><link>https://www.aigovernanceplaybook.com</link><image><url>https://substackcdn.com/image/fetch/$s_!1_7J!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0998477-258a-45f5-b72e-41cadb5f0958_1024x1024.png</url><title>The AI Governance Playbook</title><link>https://www.aigovernanceplaybook.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 14 Apr 2026 14:01:08 GMT</lastBuildDate><atom:link href="https://www.aigovernanceplaybook.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Andy Wood]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aigovernanceplaybook@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aigovernanceplaybook@substack.com]]></itunes:email><itunes:name><![CDATA[Andy Wood]]></itunes:name></itunes:owner><itunes:author><![CDATA[Andy Wood]]></itunes:author><googleplay:owner><![CDATA[aigovernanceplaybook@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aigovernanceplaybook@substack.com]]></googleplay:email><googleplay:author><![CDATA[Andy Wood]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Act Now Brief | Monday 13 April 2026 ]]></title><description><![CDATA[AI enforcement actions, live deadlines, and things that genuinely require action this 
week.]]></description><link>https://www.aigovernanceplaybook.com/p/ai-act-now-brief-monday-13-april-2026</link><guid isPermaLink="false">https://www.aigovernanceplaybook.com/p/ai-act-now-brief-monday-13-april-2026</guid><pubDate>Mon, 13 Apr 2026 07:37:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/445850a9-0518-4b07-b8d0-fb0a03d79b8c_1456x1048.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em>Two items to flag at the top. The FCA chatbot enforcement risk, first covered in the 30 March edition, didn&#8217;t carry through to the 6 April brief. It&#8217;s back here because the action is still live and the 6 April edition effectively dropped a ball that needs picking up. Second, the Digital Omnibus status has moved on since the 6 April brief: the picture is more nuanced than &#8220;it didn&#8217;t pass, August 2 holds.&#8221;</em></p><div><hr></div><h3>&#128308; Act now</h3><p><em>Enforcement actions, live deadlines, and things that genuinely require action this week</em></p><div><hr></div><p><strong>&#127468;&#127463; UK | FCA AI chatbot enforcement risk: still live</strong></p><p>First covered in the 30 March edition. No material new development, but the 6 April brief didn&#8217;t carry this forward and it still requires action.</p><p>On 26 March, the Financial Conduct Authority (FCA) published its latest perimeter report, explicitly naming AI-powered personal finance tools and chatbots as a fast-growing area of unregulated activity. The FCA&#8217;s perimeter reports signal where formal enforcement attention is heading. An AI tool positioned as &#8220;guidance&#8221; crosses into regulated advice under the Financial Services and Markets Act 2000 (FSMA) the moment it ends up recommending a product, summarising pension exit fees, or suggesting a fund. 
Consumer Duty adds a second layer: if your AI produces a hallucinated rate of return and a customer acts on it, you&#8217;re exposed regardless of your terms of service. The FCA has said that unsupervised generative AI should not be used for substantive financial communications.</p><p>If you acted on this after the 30 March edition, you&#8217;re done. If you haven&#8217;t, this week is the week.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> Map your product&#8217;s outputs against the FSMA regulated/unregulated advice line. If you can&#8217;t clearly say why a given output is guidance and not advice, that gap is your compliance risk. </p><p><strong>If you&#8217;re using AI in your business:</strong> Document how you&#8217;re supervising any AI tool that produces output a customer might construe as financial guidance. &#8220;We didn&#8217;t know&#8221; is not a Consumer Duty defence. </p><p><strong>If you&#8217;re advising AI companies:</strong> Tell fintech clients the 26 March perimeter report is their signal to get their FSMA and Consumer Duty mapping done before the FCA launches a thematic review.</p><p><strong>Who feels this most:</strong></p><ul><li><p><strong>Fintech and wealthtech:</strong> You&#8217;re the named category. 
The burden of demonstrating you&#8217;re on the right side of the regulated/unregulated line is yours.</p></li><li><p><strong>HR and benefits tools:</strong> If your AI helps employees understand pension or salary sacrifice options, you may be closer to the advice boundary than you think.</p></li></ul><div><hr></div><h3>&#128993; Heads up</h3><p><em>Developments that are not urgent today but could require action within the next two weeks</em></p><div><hr></div><p><strong>&#127466;&#127482; EU | Digital Omnibus update: more nuanced than &#8220;it didn&#8217;t pass&#8221;</strong></p><p>The 30 March and 6 April editions both reported the Digital Omnibus vote as having failed on 26 March, with August 2 holding firm. That was accurate at the time, but the legislative picture has since moved.</p><p>On 18 March, the IMCO and LIBE committees in the European Parliament adopted their joint report on the Omnibus AI package. The proposal would delay Annex III high-risk AI obligations by up to 16 months, pushing enforcement to as late as December 2027, with Annex I obligations pushed to as late as August 2028. The mechanism is conditional: the delay takes effect only once the European Commission confirms that the harmonised technical standards needed for compliance are available (two EU standardisation bodies missed their 2025 deadline and are now targeting the end of 2026). Once confirmed, the deadline shifts to six months later for Annex III systems and twelve months for Annex I.</p><p>The critical detail: for the delay to become law before August 2, a final political agreement must be reached by around June. Negotiations are live. If an agreement lands in time, August 2 moves. If not, it stays.</p><p>The planning position is unchanged from previous editions: build for August 2 and treat a confirmed delay as a bonus. 
But now you have a clearer picture of when you&#8217;ll know for sure.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> Do not pause documentation or conformity assessment work on the assumption that a delay is coming. June is the checkpoint. </p><p><strong>If you&#8217;re using AI in your business:</strong> If you deploy AI in EU Annex III categories (hiring, credit, biometrics, education), keep your risk documentation and impact assessments going. Don&#8217;t wait. </p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the June window is the one to watch. A deal before June means the delay takes legal effect. After June, August 2 stands. Build backwards from August 2 with June as the review point.</p><div><hr></div><p><strong>&#127482;&#127480; USA | DOJ AI Litigation Task Force first case expected any day</strong></p><p>The Department of Justice (DOJ) AI Litigation Task Force was established in January 2026, directed to challenge state AI laws on grounds of unconstitutional regulation of interstate commerce and federal preemption. The Department of Commerce&#8217;s evaluation of state AI laws was due on 11 March. Legal analysts had been expecting the first case &#8220;by spring.&#8221; That window is now.</p><p>Colorado&#8217;s AI Act (enforcement begins 30 June 2026) is widely seen as the most likely first target. If the DOJ files and wins an early injunction, the US state compliance picture shifts almost immediately. Colorado&#8217;s June 30 deadline could freeze before it lands.</p><p>This doesn&#8217;t mean stop compliance work. It means don&#8217;t over-invest in state-specific architecture before you know what gets filed.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> Keep your US state compliance approach modular. Don&#8217;t lock in a Colorado-specific architecture this week. 
</p><p><strong>If you&#8217;re using AI in your business:</strong> Texas TRAIGA (the Texas Responsible Artificial Intelligence Governance Act, in force since 1 January 2026) is not a target of this litigation. If you have Texas users or employees and deploy high-risk AI, your obligations there are live regardless of what happens with Colorado. </p><p><strong>If you&#8217;re advising AI companies:</strong> Your clients need to know that a DOJ filing with an injunction would directly affect whether Colorado compliance is required by June 30. Watch for it.</p><p><strong>Who feels this most:</strong></p><ul><li><p><strong>HR and hiring tools:</strong> Colorado&#8217;s Act specifically targets algorithmic discrimination in employment decisions. If the DOJ enjoins it, that compliance deadline may freeze.</p></li></ul><div><hr></div><h3>&#128994; On the radar</h3><p><em>Worth knowing about, no action needed yet</em></p><div><hr></div><ul><li><p><strong>&#127482;&#127480; Colorado AI Act: 11 weeks to June 30.</strong> No change from the 6 April edition. High-risk AI in employment, housing, credit, and healthcare must comply by 30 June. The DOJ case could change this. Assume it won&#8217;t until you hear otherwise.</p></li><li><p><strong>&#127466;&#127482; EU AI content labelling: August 2, 2026.</strong> Second draft of the Code of Practice on marking and labelling of AI-generated content is published, final version due in June. From 2 August, audio, image, video, and text outputs from generative AI must carry machine-readable markings. This obligation is not affected by the Digital Omnibus delay proposal.</p></li><li><p><strong>&#127466;&#127482; GPAI enforcement activates August 2, 2026.</strong> General-Purpose AI (GPAI) model obligations have been in force since August 2025. Enforcement powers will fully activate in four months. Fines reach up to 15 million euros or 3% of global revenue. 
The collaborative window with the EU AI Office is closing.</p></li><li><p><strong>&#127482;&#127480; Texas TRAIGA in force with real penalties.</strong> In force since 1 January 2026. Uncurable violations: $80,000 to $200,000 per incident, up to $40,000 per day ongoing. Exists regardless of what the DOJ does to other state laws.</p></li><li><p><strong>&#127468;&#127463; ICO automated decision-making guidance is coming.</strong> The Information Commissioner&#8217;s Office (ICO) is preparing a consultation draft on automated decision-making and profiling. No date confirmed. When it arrives, it will set the practical enforcement lines for AI in hiring, benefits, and credit in the UK.</p></li><li><p><strong>&#127468;&#127463; FCA Mills Review due summer 2026.</strong> The independent review into AI in financial services is expected to report this summer. Its recommendations will determine whether the FCA moves from principles-based oversight to prescriptive rules. Relevant if you&#8217;re in advisory, credit, or fraud detection in UK financial services.</p></li><li><p><strong>&#127482;&#127480; FTC AI-washing enforcement is continuing.</strong> More than a dozen cases in 2025. No change since the 6 April edition. If your marketing overstates what your AI does, you&#8217;re in scope regardless of size.</p></li></ul><div><hr></div><h3>The one thing to do this week</h3><p>If you haven&#8217;t acted on the FCA perimeter report since it first ran in the 30 March edition, do it now. Map your product outputs against the FSMA-regulated/unregulated line and document your reasoning. That&#8217;s the document the FCA will ask for if it comes looking.</p><p>If you&#8217;re already across the FCA item, make sure your EU compliance work hasn&#8217;t slowed down on the assumption that the Digital Omnibus delay is confirmed. It isn&#8217;t yet. 
June is when you&#8217;ll know.</p><div><hr></div><h3>Deadline tracker</h3><p><strong>EU</strong> | High-risk AI systems (Annex III): employment, credit, education, biometrics | 2 August 2026 | Proposed delay to Dec 2027/Aug 2028 pending political agreement before June</p><p><strong>EU</strong> | GPAI model enforcement (AI Act and General-Purpose AI Code of Practice) | 2 August 2026 | Coming up</p><p><strong>EU</strong> | AI-generated content labelling obligations (Article 50) | 2 August 2026 | Coming up</p><p><strong>EU</strong> | AI content labelling Code of Practice finalised | June 2026 | In drafting</p><p><strong>EU</strong> | Digital Omnibus: window for political agreement to protect delay | Before June 2026 | In negotiation</p><p><strong>USA</strong> | Colorado AI Act: high-risk AI in employment, credit, housing, healthcare | 30 June 2026 | 11 weeks away; possibly enjoined by DOJ</p><p><strong>USA</strong> | Texas TRAIGA high-risk AI obligations | 1 January 2026 | Now in force</p><p><strong>USA</strong> | DOJ AI Litigation Task Force: first case | Spring 2026 | Imminent</p><p><strong>USA</strong> | Oregon SB 1546 and Washington HB 2225 (AI companion chatbots, private liability) | 1 January 2027 | Coming up</p><p><strong>UK</strong> | ICO automated decision-making guidance consultation | TBD 2026 | Draft expected</p><p><strong>UK</strong> | FCA Mills Review report | Summer 2026 | Coming up</p><div><hr></div><p><em>Sources:</em></p><ul><li><p><a href="https://www.insideglobaltech.com/2026/04/09/uk-financial-services-regulators-approach-to-artificial-intelligence-in-2026/">UK financial services regulators AI approach 2026</a></p></li><li><p><a href="https://www.resultsense.com/news/2026-04-09-uk-fca-pra-ai-approach-2026">UK FCA, PRA hold principles-based AI line in 2026</a></p></li><li><p><a href="https://www.fca.org.uk/firms/innovation/ai-approach">FCA AI approach</a></p></li><li><p><a 
href="https://www.onetrust.com/blog/eu-digital-omnibus-proposes-delay-of-ai-compliance-deadlines/">EU Digital Omnibus: proposed AI Act delay (OneTrust)</a></p></li><li><p><a href="https://iapp.org/news/a/eu-digital-omnibus-analysis-of-key-changes">EU Digital Omnibus: IAPP analysis</a></p></li><li><p><a href="https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal">Digital Omnibus AI regulation proposal (European Commission)</a></p></li><li><p><a href="https://www.europarl.europa.eu/legislative-train/package-digital-package/file-digital-omnibus-on-ai">European Parliament legislative train: Digital Omnibus on AI</a></p></li><li><p><a href="https://artificialintelligenceact.eu/implementation-timeline/">EU AI Act implementation timeline</a></p></li><li><p><a href="https://code-of-practice.ai/">EU AI Act: GPAI Code of Practice</a></p></li><li><p><a href="https://digital-strategy.ec.europa.eu/en/library/commission-publishes-second-draft-code-practice-marking-and-labelling-ai-generated-content">EU AI Act content labelling Code of Practice second draft</a></p></li><li><p><a href="https://ico.org.uk/about-the-ico/our-information/our-strategies-and-plans/artificial-intelligence-and-biometrics-strategy/ai-and-biometrics-strategy-update-march-2026/">ICO AI and biometrics strategy update, March 2026</a></p></li><li><p><a href="https://www.bakerlaw.com/insights/navigating-the-emerging-federal-state-ai-showdown-doj-establishes-ai-litigation-task-force/">DOJ AI Litigation Task Force: BakerHostetler</a></p></li><li><p><a href="https://www.justice.gov/ag/media/1422986/dl?inline=">DOJ AI Litigation Task Force memo</a></p></li><li><p><a href="https://www.swept.ai/post/state-ai-regulations-2026-guide">State AI regulations 2026 guide</a></p></li><li><p><a href="https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption">Colorado AI Act and executive order disruption: King &amp; 
Spalding</a></p></li><li><p><a href="https://www.lw.com/en/insights/ai-executive-order-targets-state-laws-and-seeks-uniform-federal-standards">Trump executive order on state AI law preemption: LW</a></p></li><li><p><a href="https://www.morganlewis.com/pubs/2026/04/ai-enforcement-accelerates-as-federal-policy-stalls-and-states-step-in">AI enforcement accelerates as federal policy stalls: Morgan Lewis</a></p></li></ul><div><hr></div><h6><em><strong>DISCLAIMER</strong></em></h6><h6><em>The information is intended to be helpful but is in no way a substitute for seeking professional advice for your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full <a href="https://aigovernanceplaybook.substack.com/p/disclaimer">DISCLAIMER</a></em></h6><h6><em><strong>Copyright Notice</strong></em></h6><h6><em>The information in this document is protected by international copyright laws. You may use the information for personal purposes but you are not permitted to duplicate it or distribute it to anyone else. This copy is for you only.</em></h6>]]></content:encoded></item><item><title><![CDATA[What's Building | Thursday 9 April 2026]]></title><description><![CDATA[A forward-looking briefing on AI regulation across the USA, UK, and EU. 
What's moving through legislative pipelines right now, and what it means for your business in the next three to six months.]]></description><link>https://www.aigovernanceplaybook.com/p/ai-whats-building-thursday-9-april-2026</link><guid isPermaLink="false">https://www.aigovernanceplaybook.com/p/ai-whats-building-thursday-9-april-2026</guid><pubDate>Thu, 09 Apr 2026 14:52:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/59f4f4f1-1988-4fa9-8fae-8ecfe14c2bba_1456x1048.png" length="0" type="image/png"/><content:encoded><![CDATA[<h3>&#127482;&#127480; USA | Copyright infringement case against AI image generators gets clearer in court</h3><p><strong>Status: &#128993; Watch &#8212; A judge is letting Getty Images amend its lawsuit against Stability AI, signaling that copyright infringement claims against AI companies may be viable</strong></p><h3>What is happening</h3><p>On April 8, 2026, a federal judge in San Francisco indicated that Getty Images can strengthen its copyright lawsuit against Stability AI. During a motion hearing, the judge suggested Getty could &#8220;beef up&#8221; its core infringement claims, though she pressed the company on whether it had adequately demonstrated Stability&#8217;s intent to facilitate copyright infringement. The underlying allegation is that Stability scraped more than 12 million Getty photographs without permission to train its image generator. </p><p>This follows the UK High Court&#8217;s November 2025 ruling in the same case, which largely rejected Getty&#8217;s claims on copyright infringement but preserved findings of fact that Getty is now using to bolster the US lawsuit.</p><h3>Why founders should care</h3><p>The Getty case is the first major test of whether copyright law applies to AI model training. The US judge&#8217;s comments suggest infringement claims will survive early dismissal, which means the case will go to trial. 
If Getty wins, it creates liability for any AI company that trained on copyrighted content without licenses or explicit consent. </p><p>The UK and US courts are still working through the law, but the direction is becoming clearer: scraping copyrighted works to train models is legally risky. If you have built models on copyright material, assume this exposure will be clarified within 12 months.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you&#8217;ve trained generative models on copyright material without licenses, you face legal risk. The Getty case will clarify the law by late 2026 or 2027. Consider negotiating content licensing agreements with copyright holders now, or be prepared for injunctions or damages.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use generative AI trained on copyright material, you inherit some of this risk. This is less urgent than direct model building, but licensing disputes between model creators and copyright holders could affect your AI vendor&#8217;s stability. Ask your AI provider about their data sourcing and licensing strategy.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with copyright exposure that the Getty case is moving toward trial. Copyright licensing for model training isn&#8217;t yet settled, but founders should assume they may need to pay for rights or negotiate consent agreements with major copyright holders. This is a material risk to value and product viability.</p><h3>Who feels this most</h3><ul><li><p><strong>Media &amp; Publishing:</strong> If you train AI on published works, you&#8217;re directly at risk. The Getty case will likely force a choice between licensing agreements or limited model scope. 
If you license models from third parties, ensure your vendor has proper copyright clearances for the training data.</p></li></ul><div><hr></div><h3>&#127482;&#127480; USA | FTC publishes its 5-year enforcement roadmap, putting AI and Big Tech front and center</h3><p><strong>Status: &#128993; Watch &#8212; The FTC&#8217;s new strategic plan confirms AI enforcement is a top priority, with focus on deceptive claims, data practices, and children&#8217;s safety</strong></p><h3>What is happening</h3><p>On April 3, 2026, the Federal Trade Commission published its FY 2026-2030 Strategic Plan, setting enforcement priorities for the next five years. The plan explicitly prioritizes AI-related enforcement, with named focus areas including deceptive AI practices (AI-washing), data privacy violations tied to AI, and unfair algorithmic decision-making. The plan also signals the FTC will deploy AI and machine learning tools internally to improve its own enforcement capabilities. </p><p>This follows the agency&#8217;s March 30 settlement with OkCupid and Match Group, in which the companies agreed to a consent decree after sharing 3 million users&#8217; photos and location data with an AI firm for training without user consent or disclosure.</p><h3>Why founders should care</h3><p>The FTC&#8217;s strategic plan is a commitment to the enforcement pace we have already seen. The OkCupid settlement is the template: if you collect user data and share it for AI training without explicit user notice, the FTC will come after you. The agency is also signaling it will use existing consumer protection law (not new legislation) to police AI practices. </p><p>This means the boundaries of what&#8217;s &#8220;deceptive&#8221; or &#8220;unfair&#8221; will be set by enforcement cases, not by clear rules. 
Expect FTC action against companies making unsubstantiated AI safety claims, misrepresenting data sourcing, and failing to disclose how user data is used for model training.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> Audit your data sourcing practices now. If you collect user data and use it for AI training, your privacy policy must disclose this explicitly. Don&#8217;t make claims about your AI system&#8217;s safety, fairness, or capabilities unless you can document them. If you can&#8217;t prove a claim, cut it. The FTC will challenge it.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use third-party AI vendors to process customer data, ensure your contracts require them to disclose data sourcing and consent practices. The FTC is holding both data collectors and data users accountable. Don&#8217;t assume your vendor&#8217;s terms are compliant.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients to review their data sourcing practices and privacy disclosures immediately. The OkCupid settlement shows the FTC will pursue companies that share data for AI training without disclosure, even if the violation happened years ago. Clients should assume the FTC will scrutinize any claim they make about fairness, safety, or AI performance.</p><h3>Who feels this most</h3><ul><li><p><strong>Media &amp; Publishing:</strong> If you use user-generated content or user data to train AI systems, disclose it clearly in your privacy policy and terms of service. The OkCupid precedent is direct: undisclosed data sharing for AI training is an enforceable violation.</p></li><li><p><strong>AdTech &amp; Retail:</strong> If you use behavioral data to train recommendation or targeting algorithms, ensure consent is explicit and documented. 
The FTC is watching algorithmic discrimination and deceptive targeting practices.</p></li></ul><div><hr></div><h3>&#127482;&#127480; USA | Federal preemption of state AI laws remains uncertain, as states press ahead with enforcement</h3><p><strong>Status: &#128308; Watch &#8212; Still developing since our last brief. The preemption fight shows no resolution, and state laws are moving forward on their own timelines</strong></p><h3>What is happening</h3><p>As we covered last week, the Trump administration&#8217;s push for federal preemption of state AI laws through litigation remains in the pipeline, but states are not waiting for the outcome. Colorado&#8217;s implementation deadline moved to June 2026. New York City&#8217;s bias audit requirement is in enforcement mode. California finalized new AI transparency requirements with an August 2, 2026 effective date. </p><p>The administration&#8217;s Commerce Department is preparing to challenge state laws in court on Dormant Commerce Clause grounds, but Congress has already rejected this framing. Colorado proposed amendments to clarify developer and deployer liability under its AI Act, signaling that states are actively refining their frameworks rather than pausing for federal litigation.</p><h3>Why founders should care</h3><p>The preemption fight isn&#8217;t freezing state laws in place. State deadlines are moving forward regardless of the federal challenge. If you operate across state lines, assume all applicable state laws are binding through 2026 and beyond. Colorado&#8217;s June deadline is now less than 60 days away. California&#8217;s August 2 effective date for watermarking and transparency (under SB 942) is four months away. Multi-state operators cannot wait for federal resolution. Build for the hardest state now.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you operate in more than one US state, audit which states you serve and what each state requires. 
Build for the state with the hardest requirements first. Colorado&#8217;s June deadline is immediate. California&#8217;s August 2 deadline is coming. Don&#8217;t assume federal preemption will save you.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI in hiring, lending, or consequential decisions across state lines, you&#8217;re subject to all applicable state laws in states where your decisions affect residents. Map your exposure by state and compliance deadline. Start with Colorado and California.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients that federal preemption is uncertain but state liability is certain. Help them build a state-by-state compliance map with specific deadlines and requirements. If they don&#8217;t have a Colorado roadmap by May 1, they&#8217;ll miss the June deadline.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> New York City&#8217;s bias audit requirement is live. Colorado&#8217;s deadline is June. California&#8217;s SB 942 watermarking rule is August. If you use AI for hiring across these states, you need compliance by the nearest deadline. Start with NYC and Colorado.</p></li><li><p><strong>Financial Services:</strong> If you use AI for lending or credit decisions across state lines, assume state AI laws apply. Colorado&#8217;s June deadline requires impact assessments for high-risk systems. New York and California have parallel requirements. Map your exposure by June 1.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | Copyright and AI policy remains frozen, but the door is cracking open</h3><p><strong>Status: &#128993; Watch &#8212; Still developing since our last brief. 
The UK government still has no preferred policy on copyright and AI, but courts on both sides of the Atlantic are starting to clarify the law</strong></p><h3>What is happening</h3><p>As we covered last week, the UK government abandoned its opt-out proposal for copyright and AI in March 2026 and is now adopting a &#8220;wait and see&#8221; approach. The Getty Images case in the UK High Court (November 2025) largely rejected Getty&#8217;s copyright infringement claims, but the story is moving again with the US Getty case. The US federal judge&#8217;s April 8 comments suggest copyright claims against AI companies may survive initial dismissal, which could influence UK policy in the coming months. The government is monitoring litigation on both sides of the Atlantic.</p><h3>Why founders should care</h3><p>The UK copyright question is still unresolved, but courts are now providing answers. If the US Getty case succeeds, it will pressure the UK government to clarify its position. For now, you have a grace period on copyright, but it isn&#8217;t a long one. By late 2026 or early 2027, the government is likely to revisit this question based on court outcomes. The EU&#8217;s approach to copyright protection is stricter (as reflected in the Digital Omnibus trilogue), and the UK may follow suit if courts signal that copyright protection is necessary.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you train generative models on copyright material and serve UK or EU customers, you&#8217;re in a legal gray zone that&#8217;s narrowing. The Getty case will clarify the risk by late 2026. Consider negotiating content licensing agreements now, before court decisions force your hand.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use generative AI trained on copyright material (most large models do), track the Getty case and any UK policy updates. 
No action needed now, but this could change if courts rule against AI companies.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with UK copyright exposure that courts are starting to clarify the law. The UK government&#8217;s pause isn&#8217;t a permanent solution. If the Getty case succeeds in the US, expect UK policy to shift. Budget for licensing or model retraining by late 2026.</p><h3>Who feels this most</h3><ul><li><p><strong>Media &amp; Publishing:</strong> If you train AI on published works and serve UK customers, the Getty case outcome will affect your legal risk. The UK is less restrictive than the EU right now, but that advantage may not last. Start licensing content for model training or prepare to narrow your training data.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | ICO consultation on automated hiring decisions still open until 29 May</h3><p><strong>Status: &#128993; Watch &#8212; Still developing since our last brief. The ICO&#8217;s enforcement tone is stricter than previous guidance, and the consultation window is closing</strong></p><h3>What is happening</h3><p>As we covered last week, the Information Commissioner&#8217;s Office published a report on March 31, 2026, based on evidence from over 30 employers. The report finds that most employers think they&#8217;re using decision support, but evidence shows they&#8217;re using fully automated decisions with no meaningful human involvement. The ICO is consulting on draft guidance until 29 May 2026, with a focus on bias testing, transparency with candidates, and recourse rights. No new updates this week, but the consultation window is now about 50 days away from closing.</p><h3>Why founders should care</h3><p>This is your chance to shape UK hiring AI enforcement. The consultation closes May 29. If you build hiring tools or use AI for hiring in the UK, respond to the consultation if you want to influence the final guidance. 
The ICO&#8217;s tone is notably strict, and the final guidance will become your compliance floor. The key finding from the report is important: meaningful human involvement must be genuine and active, not a rubber stamp. If your hiring system has human review, document it and make it active.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell hiring tools to the UK, use the consultation window (closes 29 May) to respond and shape the final rules. The final guidance will land in Q3 2026 and will become a compliance requirement. You&#8217;ll need bias testing, transparency, and genuine human review. Start building these features now if you don&#8217;t have them.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI to screen candidates in the UK, audit your system now. Do you have evidence of bias testing? Can candidates see how you screened them? Can they request meaningful human review? If the answer to any is no, fix it by May 2026.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell UK clients that the ICO consultation (closes 29 May) is your window to shape the final rules. If your clients build hiring tools, submit feedback. The final guidance lands in Q3 2026 and will be enforceable.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> This is directly about you. The ICO expects meaningful human involvement, regular bias testing, and transparent communication with candidates. Document your bias testing and human review process now. The consultation closes 29 May.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | Children&#8217;s online safety consultation open until 26 May on AI obligations</h3><p><strong>Status: &#128993; Watch &#8212; Still developing since our last brief. 
The consultation window is closing, and if your product serves children, this is your chance to influence the rules</strong></p><h3>What is happening</h3><p>As we covered last week, the UK Department for Science, Innovation and Technology launched a consultation on March 2, 2026, called &#8220;Growing up in the online world: a national conversation.&#8221; The consultation includes explicit obligations for AI chatbots and generative AI services, with proposed measures including age assurance mechanisms, a potential statutory minimum age for social media, and new safeguards for AI products that children can access. The consultation closes on 26 May 2026. No new updates this week, but the response window is now about six weeks away from closing.</p><h3>Why founders should care</h3><p>If you build chat products, generative AI tools, or any service that might be accessible to children, this consultation is directly about you. The consultation is still broad enough to influence, and about six weeks remain to submit feedback. The UK is moving toward age-gating and content safety obligations for AI products serving children. The final rules will likely land in late 2026 or early 2027, with implementation by 2027 at the earliest. For now, respond to the consultation if you want to shape the outcome.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If your AI product is accessible to anyone under 16 in the UK, the consultation (closes 26 May) is your window to respond. Final guidance will likely require age verification and age-appropriate safeguards. Think now about how to implement these features. The implementation deadline is likely 2027 at the earliest.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI chatbots or generative AI services in the UK, track this consultation. Final rules may require age verification or content filtering. For most small teams, this is a low priority right now.
Watch for final guidance in Q3 2026.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell UK clients that child safety regulation for AI is coming. The consultation (closes 26 May) is your window to shape it. If your product could serve children, respond now. Final rules land in late 2026 or early 2027.</p><h3>Who feels this most</h3><ul><li><p><strong>EdTech:</strong> If you build educational AI products used by students under 16 in the UK, this consultation is directly about you. Age verification and content safety will be mandatory. Start thinking now about how to implement age checks and protect minors from inappropriate outputs.</p></li></ul><div><hr></div><h3>&#127466;&#127482; EU | Digital Omnibus trilogue enters final stage, with next meeting set for 28 April</h3><p><strong>Status: &#128308; Watch &#8212; Still developing since our last brief. Trilogue is moving fast, and a political agreement is expected by late April or May</strong></p><h3>What is happening</h3><p>As we covered last week, the European Parliament confirmed its position on March 26, 2026, and trilogue negotiations began the same day. The next trilogue session is scheduled for 28 April 2026, just 15 days from now. Political negotiations are already intense, with Parliament and Council closely aligned on several major issues, including fixed compliance deadlines: Annex III systems (hiring, credit, biometrics) by 2 December 2027, and Annex I systems (foundational models) by 2 August 2028. The Cypriot Presidency is targeting a political agreement by April or May 2026, well before the AI Act&#8217;s August 2, 2026 general application date.</p><h3>Why founders should care</h3><p>The AI Act compliance timeline is now locked in. There is no extension coming. If you sell high-risk AI systems to the EU (hiring, credit, insurance, biometrics, or consequential decision-making), your deadline is 2 December 2027, which is about 20 months away.
The trilogue is moving toward agreement, and the final rules will be published shortly after. For EU-focused founders, compliance work should start now, not after the trilogue concludes. EU member state sandboxes open August 2, 2026, which could provide liability protection if you qualify and participate. Apply for sandbox access in Q3 2026 if your system qualifies.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell to the EU and your product makes decisions about hiring, credit, or other high-risk use cases, your hard deadline is 2 December 2027. Start conformity assessment and documentation work immediately. Member state sandboxes open August 2, 2026. Check whether your member state has announced its sandbox (most will by July). If your system qualifies, applying for sandbox participation in Q3 2026 gives you liability protection during compliance.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use high-risk AI in the EU, assume you&#8217;re subject to the AI Act as a user. Track whether your AI vendor has a compliance plan for December 2027. Ask them directly about their timeline.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with EU exposure that December 2027 is locked in across EU institutions. No extension. If they sell high-risk AI to the EU, they have about 20 months and should start conformity assessment immediately. Sandbox participation (launching August 2) is optional but offers liability protection. Help clients understand which member state&#8217;s sandbox is relevant to their use case and when to apply.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> Your EU deadline is 2 December 2027. Hiring tools are explicitly high-risk. Start documentation, fairness testing, and impact assessment immediately. Member state sandboxes launch August 2, 2026.
Apply for sandbox participation by Q4 2026 if your system qualifies.</p></li><li><p><strong>Financial Services:</strong> Credit scoring and loan approval are high-risk under the AI Act. Same 2 December 2027 deadline. If you serve the EU, start compliance work now.</p></li></ul><div><hr></div><h3>&#127482;&#127480; USA | Colorado AI Act liability framework clarified, but developer-deployer disputes remain contested</h3><p><strong>Status: &#128993; Watch &#8212; Proposed amendments to Colorado&#8217;s AI Act are clarifying who&#8217;s liable when high-risk systems discriminate</strong></p><h3>What is happening</h3><p>In April 2026, Colorado proposed amendments to clarify developer and deployer liability under its AI Act. The amendments state that both developers and deployers may be held liable for violations of anti-discrimination law arising from a covered AI tool, with caveats: developers are only liable when the deployer uses the tool as intended, deployers cannot shift all liability to developers through indemnification agreements, and the use of an AI tool isn&#8217;t a defense under anti-discrimination law. This clarification is important because it pushes responsibility onto both parties to ensure compliance.</p><h3>Why founders should care</h3><p>If you develop or deploy high-risk AI systems in Colorado, assume you share liability for discrimination. You cannot shift all responsibility to your vendor, and your vendor cannot shift all responsibility to you. Both parties must use reasonable care. For developers, this means documenting intended use cases and providing compliance guidance to deployers. For deployers, this means auditing the systems you use and ensuring they&#8217;re deployed as intended. The June 2026 deadline is now critical. 
If you&#8217;re not compliant by then, you face shared liability for any discrimination the system causes.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell high-risk AI systems in Colorado (hiring, credit, insurance, etc.), document the intended use cases and provide clear guidance to deployers on how to use your system compliantly. You&#8217;re liable if the deployer uses it as intended and it discriminates, but not if the deployer misuses it. Document everything. The June deadline is 50 days away.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use high-risk AI systems in Colorado, you share liability for discrimination with your vendor. Audit your systems and document how they&#8217;re deployed. Ensure you&#8217;re using them as the vendor intended. The June deadline is 50 days away.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with Colorado exposure that liability is shared between developers and deployers. Developers can&#8217;t shed responsibility through terms of service, and deployers can&#8217;t ignore how systems are used. Both parties must ensure compliance by June 2026. Encourage developers and deployers to work together on documentation and testing now.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> Colorado&#8217;s June deadline for impact assessments and bias testing is immediate. If you build or use AI hiring tools in Colorado, ensure documented evidence of bias testing and compliance with reasonable care standards. The shared liability framework means both vendor and user are on the hook.
</p></li></ul><div><hr></div><h2>Probably noise</h2><p><em>Developments unlikely to affect founders in the next 12 months, and why</em></p><ul><li><p><strong>&#127466;&#127482; EU | Member states reporting on AI sandbox implementations:</strong> Several EU member states (Germany, France, Spain, Denmark) are finalizing their AI regulatory sandbox structures ahead of the August 2, 2026 deadline. Sandbox participation is optional and only relevant if you operate in a specific EU country and want liability protection. Most announcements will be complete by July. Not actionable until you know which member state is relevant to your use case.</p></li><li><p><strong>&#127468;&#127463; UK | AI Growth Lab consultation results:</strong> The UK DSIT opened consultation on AI Growth Lab (regulatory sandboxes) in October 2025. No new updates this week. This remains a policy proposal with no confirmed timeline. Interesting but not actionable yet. Revisit if the government announces a launch date.</p></li></ul><div><hr></div><h2>The pattern this week</h2><p>The regulatory floor continues to rise across all three jurisdictions, and this week shows acceleration in three distinct directions.</p><p>First, courts are starting to settle questions that policy has left open. The Getty Images case is moving toward trial, and the judge&#8217;s April 8 comments suggest copyright claims against AI companies will be tested on the merits. This is forcing policy hands on both sides of the Atlantic. The UK government&#8217;s &#8220;wait and see&#8221; approach is now waiting on actual court rulings. If Getty wins, expect rapid policy shifts in the UK and possibly stronger copyright protections in the Digital Omnibus. Founders cannot wait for courts to decide. Copyright risk is real and growing.</p><p>Second, the FTC&#8217;s April 3 strategic plan confirms what we saw in enforcement (OkCupid, before the reporting window).
The agency is using existing consumer protection law to police AI practices, not waiting for new legislation. The OkCupid settlement is the template for data sharing disputes: undisclosed use of user data for AI training is an enforceable violation, and the FTC will pursue it retroactively. This is the enforcement pattern going forward.</p><p>Third, the consultation windows in the UK (hiring AI by May 29, children&#8217;s safety by May 26) are closing. These are real opportunities to shape enforcement. If you operate in the UK, respond to the ICO and DSIT consultations in the next six weeks. The final guidance will land in Q3 2026 and will be enforceable.</p><p>Colorado&#8217;s June deadline is the most immediate state-level deadline in the US. Seven weeks remain. If you operate in Colorado, you&#8217;re not on schedule if you haven&#8217;t started impact assessments and bias testing. The shared liability framework (developers and deployers both liable) means vendors and customers need to work together now.</p><div><hr></div><h2>Sector getting the most heat this week</h2><p><strong>Sector: Media &amp; Publishing</strong></p><p>You are now caught between two pressing legal fronts: copyright and content labeling. The Getty case is moving toward trial, and the judge&#8217;s April 8 comments suggest copyright claims against AI companies are viable. If you train AI on published works, you face copyright infringement risk that&#8217;s becoming clearer, not smaller. Simultaneously, California&#8217;s SB 942 watermarking requirement is effective August 2, 2026, and the UK and EU are moving toward mandatory labeling of AI-generated content. The UK consultation on children&#8217;s safety (closes May 26) explicitly addresses AI-generated content. The EU&#8217;s Digital Omnibus trilogue is moving toward agreement on AI-generated content marking timelines.</p><p>What this means: copyright licensing for model training is no longer optional.
The Getty case will clarify whether you need paid licensing or can operate in the gray zone. Assume you need licensing by late 2026 or early 2027. Simultaneously, any generative AI output you create is moving toward mandatory labeling in California (August), the UK (late 2026), and the EU (as part of the trilogue). If you publish AI-generated content, you will need watermarking and labeling infrastructure. Start with California&#8217;s August 2 deadline. If you generate images or videos using AI, California SB 942 will require you to embed watermarks automatically and offer free detection tools.</p><div><hr></div><h2>Dates to put in your calendar</h2><p><strong>28 April 2026</strong> | EU | Digital Omnibus trilogue meeting scheduled. Watch for movement on copyright protections, content labeling timelines, and final agreement targets. A political agreement is expected by late April or May 2026.</p><p><strong>26 May 2026</strong> | UK | Consultation deadline for &#8220;Growing up in the online world,&#8221; including AI chatbot and generative AI obligations. Submit feedback if your product could serve children.</p><p><strong>29 May 2026</strong> | UK | Deadline for feedback on ICO consultation on automated recruitment decisions. If you use or sell hiring tools in the UK, respond to the consultation to shape final guidance.</p><p><strong>2 June 2026</strong> | USA | Colorado AI Act impact assessment requirement deadline for high-risk systems. If you operate in Colorado and use or build AI for hiring, lending, or consequential decisions, this is your deadline.</p><p><strong>2 August 2026</strong> | USA | California SB 942 effective date for AI watermarking and transparency requirements. Covered providers must have watermarking infrastructure and free detection tools operational by this date.</p><p><strong>2 August 2026</strong> | EU | Each EU member state must establish at least one AI regulatory sandbox.
Sandboxes provide liability protection if you follow sandbox guidelines in good faith while testing AI systems.</p><p><strong>August 2026</strong> | EU | EU AI Office enforcement powers activate. The office can now issue fines up to 3 percent of global annual turnover or 15 million euros.</p><p><strong>2 December 2027</strong> | EU | Compliance deadline for high-risk AI systems (Annex III) under the AI Act. If you sell hiring, credit, insurance, biometrics, or consequential decision-making tools to the EU, this is your deadline.</p><p><strong>2 August 2028</strong> | EU | Compliance deadline for foundational AI models and general-purpose AI systems (Annex I) under the AI Act. If you build general-purpose AI systems serving the EU, this is your deadline.</p><div><hr></div><p>Sources:</p><ul><li><p><a href="https://news.bloomberglaw.com/litigation/gettys-ai-infringement-claims-could-be-fortified-judge-hints">Getty Images v. Stability AI: Judge Hints Getty Could Fortify Copyright Claims</a></p></li><li><p><a href="https://www.ftc.gov/news-events/news/press-releases/2026/04/ftc-publishes-new-strategic-plan">FTC Publishes New Strategic Plan</a></p></li><li><p><a href="https://www.ftc.gov/news-events/news/press-releases/2026/03/ftc-takes-action-against-match-okcupid-deceiving-users-sharing-personal-data-third-party">FTC Takes Action Against Match and OkCupid for Data Sharing with AI Firm</a></p></li><li><p><a href="https://www.morganlewis.com/pubs/2026/04/ai-enforcement-accelerates-as-federal-policy-stalls-and-states-step-in">AI Enforcement Accelerates as Federal Policy Stalls and States Step In</a></p></li><li><p><a href="https://www.lewissilkin.com/en/insights/2026/04/01/the-latest-on-the-digital-omnibus-on-ai-102momk">The Latest on the Digital Omnibus on AI</a></p></li><li><p><a href="https://epthinktank.eu/2026/04/01/ai-regulatory-sandboxes-state-of-play-and-implementation-challenges/">AI Regulatory Sandboxes: State of Play and Implementation 
Challenges</a></p></li><li><p><a href="https://ico.org.uk/media2/migrated/4029424/regulating-ai-the-icos-strategic-approach.pdf">ICO Report on Automated Decision-Making in Recruitment</a></p></li><li><p><a href="https://www.hunton.com/privacy-and-information-security-law/enforcement-of-colorado-ai-act-delayed-until-june-2026">Enforcement of Colorado AI Act Delayed Until June 2026</a></p></li><li><p><a href="https://leg.colorado.gov/bills/sb24-205">Colorado AI Act SB 24-205 Overview</a></p></li></ul><p></p><div><hr></div><h6><em><strong>DISCLAIMER</strong></em></h6><h6><em>The information is intended to be helpful but is in no way a substitute for seeking professional advice for your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full <a href="https://aigovernanceplaybook.substack.com/p/disclaimer">DISCLAIMER</a></em></h6><h6><em><strong>Copyright Notice</strong></em></h6><h6><em>The information in this document is protected by international copyright laws. You may use the information for personal purposes but you are not permitted to duplicate it or distribute it to anyone else. This copy is for you only.</em></h6>]]></content:encoded></item><item><title><![CDATA[Act Now Brief | Monday 6 April 2026 ]]></title><description><![CDATA[The week's most urgent AI regulation developments across the USA, UK, and EU. 
Plain-English guidance for founders and operators &#8212; what to act on, what to watch, and what to ignore.]]></description><link>https://www.aigovernanceplaybook.com/p/ai-act-now-brief-monday-6-april-2026</link><guid isPermaLink="false">https://www.aigovernanceplaybook.com/p/ai-act-now-brief-monday-6-april-2026</guid><pubDate>Mon, 06 Apr 2026 14:20:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/00d8a6c5-df75-484d-b0a1-613462ed0c56_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>&#127482;&#127480; USA | FTC enforcement velocity and the shift in its enforcement posture</h3><p>Since last Monday, the FTC has clarified its AI enforcement approach. The agency is moving away from preemptive prohibition (its January stance on Rytr) and toward enforcement based on concrete consumer harm or actual deception. That means less risk if your product&#8217;s potential for misuse is hypothetical. More risk if you&#8217;re overstating what your AI does or hiding how you use user data. The FTC released its FY 2026-2030 Strategic Plan on April 3, making AI enforcement (including AI washing and data misuse) an explicit agency priority. The enforcement vehicle is Section 5 of the FTC Act: unfair or deceptive practices. OkCupid&#8217;s settlement from last week is still the precedent. Data sharing without consent is prosecutable. AI capability claims that are not substantiated are prosecutable.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If you make performance or capability claims about your AI, they need substantiation. 
Run your marketing copy through the lens of &#8220;could the FTC argue this is deceptive?&#8221; If you share user data with third parties for training, explicit consent and disclosure are not optional.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use third-party AI vendors, ask them directly: how do they handle data security and consent? If you cannot get a clear answer, assume the vendor is exposed, and you may be liable downstream.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the FTC is now enforcing. The shift from Rytr is tactical, not strategic. The agency still owns AI enforcement. Overstated claims and data misuse are live issues. Vet your marketing and data handling before the FTC comes to you.</p><div><hr></div><h3>&#127466;&#127482; EU | August 2 remains the immovable deadline. Build for it.</h3><p>No change since last Monday. The Digital Omnibus didn&#8217;t pass on March 26. The August 2, 2026 deadline for high-risk AI Act compliance stands. Four months remain. EU member states are still designating enforcement authorities (only 8 of 27 so far), but enforcement will happen when capacity arrives. The practical implication: assume the deadline holds. If your AI system qualifies as high-risk under Annex III (hiring, lending, criminal justice, education, benefits decisions), you must complete conformity assessment, technical documentation, CE marking, and national database registration by August 2. If you haven&#8217;t started, you&#8217;re months behind.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If you ship high-risk AI into the EU, your compliance scope and vendor capacity must be locked by mid-June. Run your conformity assessment gap analysis this week. 
If you&#8217;re relying on a vendor to complete this work, confirm their timeline now.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you deploy high-risk AI in the EU, the vendor&#8217;s compliance is not sufficient. You must perform impact assessments, maintain risk documentation, and ensure consumer notice. Start this work now. Don&#8217;t wait for the vendor.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the August 2 deadline is real. The Digital Omnibus delay means no extension. Build a 16-week backwards timeline from August 2. You need the scope defined by mid-April, vendor capacity confirmed by May 1, and testing complete by June 15.</p><div><hr></div><h3>&#127468;&#127463; UK | Documentation now. Enforcement later.</h3><p>Still no new enforcement actions. The ICO is in the engagement phase with foundation model developers, collecting information about data handling, bias testing, and impact assessment practices. When the ICO moves to enforcement (expected later this year or in 2027), your documented practices will be the evidence of compliance or noncompliance. Have bias testing results. Have data processing records. Have impact assessments. Have risk documentation. Have transparency disclosures. If the ICO asks and you can&#8217;t produce these artifacts, you&#8217;ve moved from a compliance question to an enforcement liability.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> Document your model development, testing, bias assessment, and risk mitigation now. These artifacts shield you from enforcement. Don&#8217;t wait for the ICO to ask.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring, benefits, or automated decisions, run your documentation audit this week. The ICO will ask. Be ready to show it.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the ICO enforcement is coming. 
The window to build your documentary evidence is closing. Do it now. After the ICO starts enforcement cases, documentation is discovery, not defense.</p><div><hr></div><h3>&#127482;&#127480; USA | White House AI policy framework is now law (sort of). Congress is still stalled.</h3><p>On March 20, the White House released its National Policy Framework for Artificial Intelligence, calling on Congress to turn it into legislation by the end of 2026. The framework proposes federal preemption of state AI laws that impose &#8220;undue burdens&#8221; on developers and general-purpose systems, while carving out child safety, fraud prevention, state procurement, and infrastructure. This is a signal. Congress has already rejected preemption multiple times (in the One Big Beautiful Bill Act and the defense bill). The framework is persuasive, not binding. What this means for you: assume state AI laws will stay. The preemption battle will play out in courts and in Congress, not on your compliance calendar. Build for California, Colorado, and Oregon. Don&#8217;t assume a federal framework will wash away state obligations.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> You&#8217;re now operating in a fragmented state legal landscape. Assume Colorado (June 30), California (2026 onwards), and Oregon (January 1, 2027) compliance requirements will remain. Build your product roadmap against the strictest rule.</p><p><strong>If you&#8217;re using AI in your business:</strong> If your business spans multiple states, plan for different compliance requirements in each. Ask your vendors whether they&#8217;re building for state-by-state compliance or betting on federal preemption. If they&#8217;re betting on preemption, plan to rework your setup if Congress doesn&#8217;t act.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the White House framework is not a relief valve. 
Congress has rejected preemption before and may do so again. Advise clients that state law compliance is mandatory now and may remain so even after federal legislation passes.</p><div><hr></div><h2>&#128993; Heads up</h2><h3>&#127482;&#127480; USA | California AB 1883 still advancing. Three separate workplace AI regulations are now in motion.</h3><p>Still no new movement since last Monday. AB 1883 (workplace surveillance and bias inference restrictions) passed committee 5-0 on March 19 and continues through the legislature. This is distinct from AB 1898 (notice and disclosure) and SB 947 (automated hiring decisions). If you build HR tech, employee monitoring, hiring tools, or worker scoring systems, California is creating three separate compliance regimes for your product. The patchwork is expanding, and these bills create private rights of action (class action risk). Plan for all three to pass. Don&#8217;t rely on any single veto or legislative failure.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If you build HR, hiring, or employee monitoring AI, assume all three California bills will pass. Audit your product against notice requirements (AB 1898), automated hiring restrictions (SB 947), and bias/surveillance prohibitions (AB 1883).</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring or performance management in California, your vendors may face new legal restrictions. Start asking vendors what they&#8217;re doing to comply with these three bills.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients California is moving toward comprehensive regulation of workplace AI. Build for all three bills. Class action exposure is real if you miss any of them.</p><div><hr></div><h3>&#127482;&#127480; USA | Colorado AI Act enforcement is three months away.
High-risk systems in employment, housing, credit, and healthcare must comply by June 30, 2026.</h3><p>Still on track. No change since last Monday. Enforcement of the Colorado AI Act begins on June 30, 2026. Developers must exercise reasonable care, provide technical documentation, and publish statements. Deployers must adopt risk policies, run impact assessments, and issue consumer notices. Violations are consumer protection violations (up to $20,000 per violation, depending on impact). If you operate in Colorado or serve Colorado customers with high-risk AI, your June 30 deadline is in 12 weeks.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If your system is high-risk under the Colorado Act, confirm your technical documentation and public statement by May 15. You need a full month before June 30 to fix any gaps.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you deploy high-risk AI in Colorado, your impact assessments and consumer notices must be in place by June 30. Start the audit now.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients Colorado is the first state with a comprehensive AI enforcement regime. Get compliant by June 30. After that date, noncompliance exposes you to state AG enforcement and class action risk.</p><div><hr></div><h2>&#128994; On the radar</h2><ul><li><p><strong>FTC&#8217;s AI washing enforcement is accelerating:</strong> The agency brought a dozen AI washing cases in 2025, targeting overstated capability claims. Expect more enforcement in 2026. If your marketing says your AI does something, have the data to prove it.</p></li><li><p><strong>State deepfake laws are multiplying:</strong> 46 states now have deepfake legislation (as of early 2026). Montana and South Dakota require election deepfake disclosures.
Expect broader coverage to include platform liability and payment processor requirements as lawmakers broaden their approach beyond creators.</p></li><li><p><strong>California&#8217;s AI youth mental health bill (SB 1181) had a hearing April 6:</strong> Bill is moving through the legislature. Will likely regulate AI systems designed to influence youth engagement or mental health. Watch for impact on consumer AI and social platforms.</p></li><li><p><strong>EU AI Office code of practice on transparency is in development:</strong> Expect final publication before August 2, 2026. Will provide guidance on transparency, labeling, and disclosure for general-purpose AI systems and foundation models.</p></li><li><p><strong>White House DOJ task force is still litigating state AI laws on constitutional grounds:</strong> This will take years. Build for state compliance now. Don&#8217;t assume federal preemption arguments will succeed in court.</p></li></ul><div><hr></div><h2>The one thing to do this week</h2><p>Audit your marketing copy and data handling against the FTC&#8217;s Section 5 enforcement standard. For each AI capability claim you make, ask: Do I have substantiation data? For each dataset or user data point you touch, ask: Is there explicit consent and full disclosure of how the data is used and who it&#8217;s shared with? If you can&#8217;t answer yes to either question, fix it this week before the FTC comes to you.</p><div><hr></div><h2>Deadline tracker</h2><p><strong>&#127466;&#127482; EU</strong> | High-risk AI systems must comply with EU AI Act (conformity assessment, CE marking, national registration) | August 2, 2026 | Four months away. No extension. Assume the deadline holds.</p><p><strong>&#127482;&#127480; USA</strong> | Colorado AI Act enforcement begins (reasonable care, documentation, consumer notice) | June 30, 2026 | Three months away. 
High-risk systems in employment, housing, credit, and healthcare.</p><p><strong>&#127482;&#127480; USA</strong> | Oregon SB 1546 and Washington HB 2225 take effect (AI companion chatbots, disclosure requirements) | January 1, 2027 | Nine months away. Oregon includes a private right of action ($1,000 per violation).</p><p><strong>&#127482;&#127480; USA</strong> | California AB 1883 (workplace surveillance and bias inference restrictions) | TBD 2026 | Passed committee 5-0 March 19. High likelihood of passage.</p><p><strong>&#127482;&#127480; USA</strong> | California AB 1898 and SB 947 (AI hiring and employment notice/disclosure requirements) | TBD 2026 | Both advancing through the legislature. Private right of action in both.</p><p><strong>&#127468;&#127463; UK</strong> | ICO finishes engagement with 11 foundation model developers | TBD 2026 | Findings will inform enforcement priorities. Expect enforcement cases to follow later in 2026 or 2027.</p><p><strong>&#127468;&#127463; UK</strong> | UK AI Bill introduced (expected after May King&#8217;s Speech) | TBD 2026 | Awaiting legislative timetable. 
Will provide a statutory foundation for AI regulation.</p><div><hr></div><p>Sources:</p><ul><li><p><a href="https://www.morganlewis.com/pubs/2026/04/ai-enforcement-accelerates-as-federal-policy-stalls-and-states-step-in">AI Enforcement Accelerates as Federal Policy Stalls and States Step In</a></p></li><li><p><a href="https://natlawreview.com/press-releases/ftc-brings-dozen-ai-washing-enforcement-cases-2025-targeting-overstated-ai">FTC Brings Dozen AI-Washing Enforcement Cases in 2025, Targeting Overstated AI Claims</a></p></li><li><p><a href="https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20260323-white-house-releases-national-policy-framework-for-artificial-intelligence">The White House National Policy Framework for Artificial Intelligence</a></p></li><li><p><a href="https://www.ropesgray.com/en/insights/alerts/2026/03/the-white-house-legislative-recommendations-national-policy-framework-for-artificial-intelligence-an">White House Legislative Recommendations on AI Preemption of State Laws</a></p></li><li><p><a href="https://artificialintelligenceact.eu/implementation-timeline/">EU AI Act Implementation Timeline</a></p></li><li><p><a href="https://asanify.com/blog/news/ai-hiring-compliance-deadline-april-4-2026/">AI News Digest: The AI Hiring Compliance Deadline Just Got Complicated</a></p></li><li><p><a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/">Guidance on AI and Data Protection (ICO)</a></p></li><li><p><a href="https://openclawd.in/uk-ai-regulation-news-today-2026-bills-laws/">UK AI Regulation News Today 2026: Bills, Laws &amp; Businesses</a></p></li><li><p><a href="https://natlawreview.com/article/new-california-ai-laws-taking-effect-2026">New California AI Laws Taking Effect in 2026</a></p></li><li><p><a href="https://www.transparencycoalition.ai/news/ai-legislative-update-april3-2026">AI Legislative Update: April 3, 
2026</a></p></li><li><p><a href="https://www.hunton.com/privacy-and-cybersecurity-law-blog/enforcement-of-colorado-ai-act-delayed-until-june-2026">Enforcement of Colorado AI Act Delayed Until June 2026</a></p></li><li><p><a href="https://www.nbcnews.com/politics/politics-news/2026-new-laws-states-elections-midterms-ai-obamacare-aca-paid-leave-rcna247602">New laws in 2026 target AI and deepfakes</a></p></li><li><p><a href="https://www.citizen.org/article/tracker-intimate-deepfakes-state-legislation/">State Legislation on Deepfakes (Public Citizen Tracker)</a></p></li><li><p><a href="https://iapp.org/news/a/iapp-global-summit-2026-ftc-commissioner-meador-stresses-agency-preference-for-case-by-case-enforcement/">IAPP Global Summit 2026: FTC Commissioner Meador on Enforcement Approach</a></p></li><li><p><a href="https://ppc.land/ftcs-2026-2030-plan-puts-big-tech-kids-data-and-ad-fraud-in-the-crosshairs/">FTC&#8217;s 2026-2030 Strategic Plan</a></p></li></ul><div><hr></div><h6><em><strong>DISCLAIMER</strong></em></h6><h6><em>The information is intended to be helpful but is in no way a substitute for seeking professional advice for your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full <a href="https://aigovernanceplaybook.substack.com/p/disclaimer">DISCLAIMER</a></em></h6><h6><em><strong>Copyright Notice</strong></em></h6><h6><em>The information in this document is protected by international copyright laws. You may use the information for personal purposes but you are not permitted to duplicate it or distribute it to anyone else. This copy is for you only.</em></h6>]]></content:encoded></item><item><title><![CDATA[What's Building | Thursday 2 April 2026]]></title><description><![CDATA[A forward-looking briefing on AI regulation across the USA, UK, and EU. 
What's moving through legislative pipelines right now, and what it means for your business in the next three to six months.]]></description><link>https://www.aigovernanceplaybook.com/p/ai-whats-building-thursday-2-april-2026</link><guid isPermaLink="false">https://www.aigovernanceplaybook.com/p/ai-whats-building-thursday-2-april-2026</guid><pubDate>Thu, 02 Apr 2026 14:46:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1ccdd8dc-13b0-42a4-8327-caa8959c8bcb_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>On the watchlist</h2><p><em>Things moving through regulatory pipelines that will matter in the next 3 to 6 months</em></p><h3>&#127482;&#127480; USA | Federal preemption and the state law showdown</h3><p><strong>Status: &#128308; Watch &#8212; The White House just picked a fight it may lose, and your state laws are now in legal limbo</strong></p><h3>What is happening</h3><p>As we covered last week, the White House released a National Policy Framework on 20 March 2026 proposing broad federal preemption of state AI laws. The framework directs the Attorney General to challenge state laws in federal court, particularly those requiring &#8220;alterations to truthful outputs&#8221; of AI models or imposing liability on developers. Since then, Congress has rejected comparable preemption in the National Defense Authorization Act, and states have pushed back hard. </p><p>An NPR analysis from 28 March shows states refusing to back down despite the Trump administration&#8217;s preemption push, with Colorado, California, and Texas already in the litigation pipeline. 
The Commerce Department report flagging burdensome state laws remains the administration&#8217;s foundation for legal challenges, but the statutory framework for preemption is weaker than it was when the brief was filed.</p><h3>Why founders should care</h3><p>This battle is still moving in slow motion, but state laws aren&#8217;t freezing in place while it plays out. Colorado&#8217;s impact assessment deadline moves to June 2026. New York City&#8217;s bias audit requirement is already live. California just added a new requirement (see below). The federal government is betting it can use Dormant Commerce Clause theory to kill state laws in federal court, but Congress has already rejected this framing. If you operate across state lines, assume state laws are binding and budget for them accordingly. The preemption fight isn&#8217;t a &#8220;wait and see&#8221; situation for your product roadmap.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell across state lines, don&#8217;t wait for federal preemption to succeed. Colorado&#8217;s impact assessment deadline is June 2026, California is adding new requirements (effective 1 July 2026), and NYC is live now. Build for the hardest state you operate in, starting with Colorado by May 2026.</p><p><strong>If you&#8217;re using AI in your business:</strong> State laws still apply to how you use AI in hiring, lending, and employment decisions. If you operate in Colorado, NYC, or California, audit your systems against their specific requirements (impact assessments, bias audits, and advance notice to candidates). Do this by May 2026 for Colorado and California deadlines.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients that federal preemption is uncertain but state law is certain. Help them audit which states they operate in, what those states require, and what the actual deadline is for each. 
If they&#8217;re using AI for hiring or lending in multiple states, they need a compliance map by May 2026.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> New York City requires bias audits and advance notice now. Illinois allows candidates to sue directly if they believe they were discriminated against via AI hiring tools, without filing a complaint first. Colorado&#8217;s impact assessment deadline is June 2026. California&#8217;s new AI executive order (below) extends to hiring tools. Multi-state hiring tools must comply with all of these. Start with NYC and Colorado.</p></li><li><p><strong>Financial Services:</strong> Multi-state lenders must assume state AI laws remain binding. The Federal Reserve updated model risk management guidance for AI systems in February, and state attorneys general are watching for discrimination in credit scoring. If you approve loans across state lines, assume you need impact assessments in Colorado, California, and any state where you originate credit.</p></li><li><p><strong>Retail &amp; AdTech:</strong> Pricing and advertising algorithms are not yet explicitly regulated at state level, but California&#8217;s new executive order focuses on government procurement standards. If you sell to California state agencies, this affects you directly.</p></li></ul><div><hr></div><h3>&#127482;&#127480; USA | California expands AI government procurement standards</h3><p><strong>Status: &#128993; Watch &#8212; California just made AI companies&#8217; lives harder if they want to sell to the state</strong></p><h3>What is happening</h3><p>On 31 March 2026, California Governor Gavin Newsom signed Executive Order N-5-26, directing state agencies to develop new standards for AI companies seeking to contract with the state. The order requires agencies to create procurement requirements for AI systems, including responsible use guidelines for generative AI in government operations. 
This applies to any AI product sold to or used by California state agencies, from HR tools to data analytics platforms. The executive order also directs state agencies to expand responsible use of generative AI internally, with guidelines on transparency and accountability. </p><p>Unlike state AI laws (which face federal preemption challenges), procurement standards are a different animal legally. They function as a market requirement, not a regulation. If you want California&#8217;s business, you&#8217;ll need to meet their procurement standards.</p><h3>Why founders should care</h3><p>This isn&#8217;t a regulation yet. It&#8217;s a procurement requirement, which means it only applies if you want to sell to California state agencies. But California is a massive buyer of software and services. If your product is even tangentially AI-related and you want California state business, you need to pay attention. The standards are still being written (agencies have been directed to develop them), so the exact requirements aren&#8217;t yet public. But Newsom&#8217;s focus is on transparency, responsible use, and safety. </p><p>If you&#8217;re selling hiring, analytics, or decision-making tools to California agencies, expect questions about how your system works, how you test for bias, and what safeguards you have in place. This will also set a precedent for other states to do the same.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell to government agencies in California (or other states following its lead), start tracking what procurement standards are being written. Newsom&#8217;s order directs agencies to create standards, but the final requirements aren&#8217;t public yet. 
Monitor the California government procurement office for updates starting in Q2 2026.</p><p><strong>If you&#8217;re using AI in your business:</strong> If your state or local government is using AI, this doesn&#8217;t directly affect you unless you&#8217;re selling to them. If you are, you&#8217;ll need to meet new procurement standards. For most small teams, this is a low priority unless you&#8217;re a govtech company.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients selling to government agencies in California or other states that procurement standards are becoming a compliance requirement alongside regulation. These are often stricter than regulations because government buyers can set their own requirements. If a client sells hiring, analytics, or decision-making tools to the government, they should start planning for procurement requirements now.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> California state agencies use hiring tools. If you sell or provide these tools to the California government, you&#8217;ll need to meet new procurement standards. Most will focus on bias testing, transparency, and non-discrimination safeguards.</p></li><li><p><strong>Retail &amp; AdTech:</strong> Pricing and targeting algorithms used by California state agencies (if any) will be subject to procurement review. Most AdTech does not sell directly to the government, so this is low priority unless you work in the govtech space.</p></li></ul><div><hr></div><h3>&#127482;&#127480; USA | FTC&#8217;s Section 5 enforcement playbook is live</h3><p><strong>Status: &#128993; Watch &#8212; The FTC just gave itself a blank check to enforce AI practices you may not know are illegal</strong></p><h3>What is happening</h3><p>As we noted last week, the Federal Trade Commission published its Policy Statement on AI and Section 5 of the FTC Act on 11 March 2026. 
The statement doesn&#8217;t introduce new rules but interprets the century-old Section 5 prohibition on unfair or deceptive practices to apply directly to AI systems. It signals the FTC&#8217;s enforcement priorities: algorithmic discrimination, deceptive AI-generated content, privacy violations tied to AI data collection, non-transparent automated decision-making, and false or misleading claims about AI safety. Since the policy dropped, the FTC has been laying the groundwork for enforcement cases. No major enforcement actions have been filed yet, but the targeting is explicit.</p><h3>Why founders should care</h3><p>If you generate content, use AI to make decisions about people, train models on data you collected without clear consent, or claim your AI is safe or unbiased, you&#8217;re in the FTC&#8217;s perimeter. The Section 5 standard is broad, which gives the FTC room to act but also means the boundaries aren&#8217;t crystal clear. Expect enforcement action in the next 6 to 12 months, particularly against companies making safety or fairness claims they can&#8217;t substantiate. The FTC&#8217;s approach is to use existing authority rather than wait for new legislation. This is faster and gives the agency more discretion.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> Audit your content generation, disclosure practices, and data sourcing now. The FTC will challenge any claim about AI that you can&#8217;t prove. If you generate images, text, or audio, label it. If you make safety claims, document them. Document data sourcing and user consent for training. Do this audit by May 2026.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI to screen candidates, approve credit, detect fraud, or make decisions affecting people, document that you&#8217;ve tested the system for discrimination. The FTC is enforcing anti-discrimination law through the AI lens. 
If you don&#8217;t have evidence that your system doesn&#8217;t discriminate, fix it.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients that the FTC will challenge any AI claim they can&#8217;t prove. Treat AI systems like consumer products: disclosed features, tested safety claims, transparent decision-making. If a client makes any claim about fairness or safety, they need documentation to back it up.</p><h3>Who feels this most</h3><ul><li><p><strong>Media &amp; Publishing:</strong> If you generate images, deepfakes, or synthetic media, the FTC is now explicitly watching for deceptive labeling. Label all generated content clearly. Don&#8217;t make unlabeled deepfake ads.</p></li><li><p><strong>Financial Services:</strong> The FTC and Consumer Financial Protection Bureau are coordinating on algorithmic discrimination in lending. If you use AI to score credit, price loans, or detect fraud, document non-discrimination testing.</p></li><li><p><strong>HR / Hiring:</strong> If you use AI to screen resumes, conduct interviews, or rank candidates, the FTC is watching for discrimination. Documentation of fairness testing is now your first line of defense.</p></li></ul><div><hr></div><h3>&#127466;&#127482; EU | Digital Omnibus trilogue accelerating, and member states must launch sandboxes by August</h3><p><strong>Status: &#128308; Watch &#8212; EU AI Act timelines just locked in, and compliance deadlines are now fixed through 2028. Member states must open regulatory sandboxes by 2 August 2026</strong></p><h3>What is happening</h3><p>As covered last week, trilogue negotiations began after Parliament confirmed its position on 26 March. The next trilogue session is scheduled for 28 April 2026, with reports indicating political negotiations are already intense. Separately, each EU member state is required by the AI Act to establish at least one AI regulatory sandbox by 2 August 2026. 
A sandbox is a controlled environment where AI systems can be tested with regulatory guidance before market release. </p><p>The Commission published draft implementing guidance in December 2025 and requested feedback by January 2026. Sandboxes are optional for companies (you don&#8217;t have to use one), but they provide liability protection if you follow sandbox guidelines in good faith. For the first time, member states must announce their sandbox structures, hosting organizations, and application processes by mid-2026.</p><h3>Why founders should care</h3><p>Two separate pressures are now on a fixed schedule. First, the Digital Omnibus trilogue is moving fast. Agreements are expected by late April or May 2026, with final compliance deadlines confirmed: Annex III systems (hiring, credit, biometrics) by 2 December 2027, Annex I systems (foundational models) by 2 August 2028. These dates are now locked in across EU institutions and won&#8217;t change significantly in trilogue. Second, EU member states must have sandboxes open by 2 August 2026. </p><p>If you&#8217;re a founder building AI in Europe, you now have a choice: wait for sandboxes to open (August 2026) and test under modified rules, or start compliance work now under the standard AI Act regime. Sandboxes offer liability protection if you follow their guidance, but they require approval and participation. For most small teams, waiting for your national sandbox to open makes sense. For teams operating across multiple EU countries, you can pick the sandbox with the best terms for your use case.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell to the EU and your product makes decisions about hiring, credit, or other high-risk use cases, your hard deadline is 2 December 2027 (20 months away). Check whether your member state has announced its sandbox yet (most will by July 2026). 
If your system qualifies, applying for sandbox participation in Q3 2026 gives you liability protection during the compliance period. If not, start conformity assessment planning now.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI in the EU for hiring, lending, or decisions affecting people, assume you&#8217;re subject to the AI Act as a user. You may need to ensure the systems you buy are compliant by December 2027. Track whether your AI vendor has a compliance plan.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with EU exposure that December 2027 is now a locked-in deadline across EU institutions. No extension is coming. If they sell HR, credit, or insurance products to the EU, they have 20 months and should start conformity assessment immediately. Sandbox participation (launching August 2026) is optional but offers liability protection. Help clients understand which member state&#8217;s sandbox is relevant to their use case.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> Your EU deadline is 2 December 2027. Hiring tools are explicitly high-risk. Start documentation, fairness testing, and impact assessment immediately. Most member states will announce their sandboxes by July 2026. If your system is hiring-focused, apply for sandbox participation by Q4 2026.</p></li><li><p><strong>Financial Services:</strong> Credit scoring and loan approval are high-risk. Same 2 December 2027 deadline. Banks operating across multiple member states should assume this deadline is non-negotiable.</p></li><li><p><strong>Healthcare:</strong> Diagnostic and treatment recommendation systems are likely high-risk. If you sell to EU hospitals, classify your system immediately. 
High-risk systems must comply by December 2027.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | ICO publishes report and consultation on automated hiring decisions</h3><p><strong>Status: &#128993; Watch &#8212; The UK has moved from hands-off to hands-on on hiring AI, with a consultation open until 29 May 2026</strong></p><h3>What is happening</h3><p>On 31 March 2026, the Information Commissioner&#8217;s Office published a report on automated decision-making in recruitment, based on evidence from over 30 employers and public perception research. The report finds that many employers don&#8217;t acknowledge they&#8217;re using automated decision-making, and therefore fail to put in place safeguards like transparency, bias monitoring, and accountability measures. The ICO is consulting on draft guidance until 29 May 2026. </p><p>The key finding: most employers think they&#8217;re using decision support (human-in-the-loop), but evidence shows they&#8217;re using automated decision-making with no meaningful human involvement. The ICO expects meaningful human involvement to be genuine and active, not a rubber stamp. The consultation covers data protection impact assessments, bias testing, transparency with candidates, and recourse rights.</p><h3>Why founders should care</h3><p>This signals a major shift in UK regulatory approach. The government previously favored a &#8220;pro-innovation, non-statutory&#8221; stance on workplace AI. This report ends that. The ICO is now saying that if you use AI to make hiring decisions in the UK, you must test for bias, be transparent with candidates, and give them a genuine right to human review. The consultation is open until 29 May, which means the final guidance will land in Q2-Q3 2026. </p><p>If you sell hiring tools to the UK or use AI for hiring in the UK, this is your compliance roadmap for the next 12 months. The ICO&#8217;s tone is notably stricter than previous guidance. This isn&#8217;t a suggestion. 
It&#8217;s an enforcement direction.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell hiring tools to the UK, assume the ICO&#8217;s final guidance (landing mid-2026) will become your compliance floor. You&#8217;ll need: bias testing (ideally monthly), transparency features (tell candidates how the system works), and genuine human review rights (not a rubber stamp). Start building these features now if you don&#8217;t have them. The consultation closes on 29 May, so the final guidance lands in Q3 2026.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI to screen candidates in the UK, audit your system now. Do you have evidence of bias testing? Can candidates see how you screened them? Can they request human review? If the answer to any of these is no, fix it by May 2026.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell UK-based clients that the ICO has moved from advisory to enforcement mode on hiring AI. The consultation (closes 29 May) is your chance to shape the final rules. If your clients build hiring tools or use them to screen candidates, respond to the consultation if you want to influence the outcome. Final guidance lands in Q3 2026.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> This is directly about you. The ICO expects meaningful human involvement, regular bias testing, and transparent communication with candidates. If you use AI to screen resumes or conduct interviews in the UK, document your bias testing and human review process now. The consultation closes on 29 May. 
If you build hiring tools, respond to the consultation.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | Copyright and AI policy remains frozen, but the door is cracking open</h3><p><strong>Status: &#128993; Watch &#8212; The UK government backed away from its own opt-out proposal, which means copyright and AI in the UK is now a live policy question with no clear answer</strong></p><h3>What is happening</h3><p>As we covered last week, the UK Department for Science, Innovation and Technology published its response to a copyright and AI consultation on 19 March 2026. The government abandoned its previous opt-out proposal (allow AI companies to train on copyright works, but let rights holders opt out) after overwhelming rejection by creative industries. The government now has no preferred policy and is adopting a &#8220;wait and see&#8221; approach, monitoring litigation (Getty Images v Stability AI in the US, similar cases in the UK), international developments, and market dynamics. No new progress this week, but the policy framework remains open.</p><h3>Why founders should care</h3><p>The UK copyright question is genuinely unresolved. The government has rejected its own best idea and is waiting for courts and market forces to resolve the issue. If you&#8217;re training models on copyright material, you&#8217;re technically violating UK copyright law (there&#8217;s no AI-specific exception for training). The government isn&#8217;t enforcing this aggressively right now, but that could change if courts rule against AI companies. For now, you have a grace period, but it&#8217;s not a long one. By late 2026 or 2027, the government is likely to revisit this question, possibly with a different answer based on court decisions.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you train generative models on copyright material and serve UK customers, you&#8217;re technically not covered by a copyright exception (unlike some EU countries). 
You&#8217;re in a legal gray zone. The government isn&#8217;t enforcing it now, but courts may clarify by late 2026. Monitor the Getty case and any UK copyright litigation.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use generative AI trained on copyright material (almost all large models do this), understand that the UK government&#8217;s position is &#8220;we don&#8217;t know yet.&#8221; This is lower risk than the EU AI Act, but not zero risk. No action needed now, but track court cases.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with UK copyright exposure that the government has paused reform and is waiting for courts and international developments. This buys time, but it&#8217;s not a permanent solution. Copyright risk in the UK is lower than in the EU but higher than in the US.</p><h3>Who feels this most</h3><ul><li><p><strong>Media &amp; Publishing:</strong> If you train AI models on published works, the UK government&#8217;s wait-and-see approach means you&#8217;re in a policy limbo that could swing either way. The EU is negotiating stronger copyright protections for creators (as part of the Digital Omnibus trilogue); the UK is waiting. If you&#8217;re a model builder, the UK is currently less restrictive than the EU, but that could change.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | Children&#8217;s online safety regulations now include AI obligations</h3><p><strong>Status: &#128993; Watch &#8212; A consultation launched on 2 March with tough new rules for AI chatbots and generative AI services, open for comment until 26 May</strong></p><h3>What is happening</h3><p>As covered last week, the UK Department for Science, Innovation and Technology launched a consultation on 2 March 2026 called &#8220;Growing up in the online world: a national conversation.&#8221; The consultation includes explicit obligations for AI chatbots and generative AI services. 
Proposed measures include stronger age assurance mechanisms, a potential statutory minimum age for social media, raising the UK&#8217;s age of digital consent (currently 13), restrictions on features like livestreaming and disappearing messages, and new obligations for AI chatbots and generative AI to protect children. </p><p>The consultation closes on 26 May 2026. On 12 March, the ICO and communications regulator Ofcom issued coordinated letters to social media and video platforms demanding compliance action beyond self-declaration.</p><h3>Why founders should care</h3><p>If you&#8217;re building a chat product, a generative AI tool accessible to children, or any service that might attract users under 16, this consultation is directly about you. The UK is moving toward age-gating and content safety obligations for AI. The consultation is broad, which means the final rules could be narrower or broader depending on feedback. </p><p>For now, treat this as a signal: UK child safety regulation for AI is coming, probably within 12 to 18 months after the consultation closes. The consultation closes 26 May, so you have time to respond if your product affects children.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If your AI product is accessible to anyone under 16 in the UK, think now about age verification and age-appropriate safeguards. The consultation closes 26 May. If you have UK customers, respond to the consultation if you want to shape the final rules. Implementation timeline is likely 2027 at the earliest.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI chatbots or generative AI services in the UK, track this consultation. If final rules require age verification or content filtering, you&#8217;ll need to ensure your AI provider complies. For most small teams, this is low priority right now. 
Watch for final guidance in Q3 2026.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell UK-based clients that child safety regulation for AI is coming. The consultation (closes 26 May) is your window to shape it. If your product serves children or could serve children, respond to the consultation. Final rules likely land in late 2026 or early 2027.</p><h3>Who feels this most</h3><ul><li><p><strong>EdTech:</strong> If you build educational AI products used by students under 16 in the UK, this consultation is directly about you. Age verification and content safety will be mandatory. Start thinking now about how to implement age checks and protect minors from inappropriate outputs.</p></li></ul><div><hr></div><h2>Probably noise</h2><p><em>Developments unlikely to affect founders in the next 12 months, and why</em></p><ul><li><p><strong>&#127466;&#127482; EU | GPAI Code of Practice finalized:</strong> The EU AI Office published the final GPAI (general-purpose AI) Code of Practice on 5 March 2026. This is soft law, not binding, and sets out voluntary best practices for AI model providers on transparency, copyright, and safety testing. It&#8217;s not an enforcement mechanism for small teams right now. If you&#8217;re a model provider (building Claude-like systems), pay attention. If you&#8217;re using AI in your business or building customer products, this is background noise for now.</p></li><li><p><strong>&#127466;&#127482; EU | EU member states report on AI sandboxes:</strong> Over the past week, several EU member states announced their sandbox implementations. Germany, France, Spain, and others are opening regulatory sandboxes for AI testing. This is good news for founders but does not create new compliance obligations. Sandbox participation is optional. If you operate in a specific EU country and want liability protection, check that country&#8217;s sandbox announcement. 
Most announcements will be complete by July 2026.</p></li><li><p><strong>&#127468;&#127463; UK | AI Growth Lab sandboxes delayed:</strong> The UK Department for Science, Innovation and Technology (DSIT) opened a consultation on its AI Growth Lab (regulatory sandboxes) in October 2025, with responses due 2 January 2026. No new updates in the past week. This remains a policy proposal with no confirmed timeline. It&#8217;s interesting but not actionable yet. Revisit if the government announces a launch date.</p></li></ul><div><hr></div><h2>The pattern this week</h2><p>The regulation wave is accelerating across all three jurisdictions, and the pattern is clear: governments are moving from &#8220;should we regulate?&#8221; to &#8220;here is how you comply.&#8221; The UK ICO just published enforcement expectations for hiring AI. California just added government procurement standards. The EU has locked in compliance deadlines 20 months from now. The US is fighting in court over state laws while the FTC uses existing authority to enforce AI practices. None of these developments are stopping. All of them are moving toward enforcement by mid-2026.</p><p>The most important shift this week is the UK&#8217;s move from advisory to enforcement on hiring AI. The ICO report signals that the government&#8217;s hands-off period on workplace AI is over. The consultation, open until 29 May, gives companies a window to shape the final rules, but the direction is set: meaningful human involvement, bias testing, and transparency. This is the same pattern we saw in the EU with the AI Act and in US states with impact assessments. The regulatory floor is rising everywhere.</p><p>California&#8217;s government procurement order is a model that other states will likely copy. Procurement standards are harder to challenge legally than regulations (they are not rules about how commerce works, they are rules about how the government buys software), and they create a market incentive for AI companies to meet safety and transparency standards. 
Expect more states to do this.</p><p>The EU&#8217;s sandbox requirement (due 2 August 2026) creates a test environment where small teams can get liability protection if they follow sandbox guidelines. This is valuable, but it&#8217;s also a signal that the EU is serious about enforcement. Sandboxes exist so companies have a place to fail safely. Once sandboxes are open, there won&#8217;t be any excuse for not knowing the rules.</p><div><hr></div><h2>Sector getting the most heat this week</h2><p><strong>Sector: HR / Hiring</strong></p><p>You&#8217;re now in the regulatory crosshairs across all three jurisdictions, and the pace has accelerated this week. In the US, New York City&#8217;s bias audit requirement is live, Illinois allows direct lawsuits, and Colorado&#8217;s deadline is June 2026. The FTC is explicitly watching hiring AI for discrimination. In the UK, the ICO just published a report saying most hiring AI lacks meaningful human involvement, and it is demanding change by the time its guidance finalizes in Q3 2026. In the EU, hiring tools are high-risk, and your compliance deadline is 2 December 2027. If you build, sell, or use AI for hiring, your 2026 is now allocated to compliance work.</p><p>The ICO&#8217;s report is this week&#8217;s most notable development. It found that 80% of employers think they&#8217;re using decision support (human-in-the-loop), but evidence shows they&#8217;re making fully automated decisions with no real human involvement. The ICO is now saying that rubber-stamp review doesn&#8217;t count as human involvement. A human must have the ability to actively influence the decision before it&#8217;s applied. This is a higher standard than most hiring teams currently meet. If you use AI to screen candidates in the UK, document your human review process now. If it&#8217;s a rubber stamp, redesign it.</p><p>If you sell hiring tools, assume all jurisdictions are now on a tighter compliance timeline. New York City requires bias audits now. 
Colorado requires impact assessments by June. The UK expects meaningful human involvement and bias testing by the time guidance finalizes in Q3. The EU has until December 2027, but that is only 20 months away. Start with whatever deadline is closest to you. If you sell to multiple jurisdictions, pick the hardest standard and build for that. You will be compliant everywhere else.</p><div><hr></div><h2>Dates to put in your calendar</h2><p><strong>11 March 2026</strong> | USA | FTC Policy Statement on AI and Section 5 published. The FTC is now actively enforcing AI practices under existing authority. Review your disclosure practices, data sourcing, and any safety claims you make.</p><p><strong>20 March 2026</strong> | USA | White House releases National Policy Framework for AI with broad preemption proposal. The federal-state legal battle is formally on. Watch for litigation updates.</p><p><strong>26 March 2026</strong> | EU | European Parliament confirms position on Digital Omnibus. AI Act compliance deadlines are now locked in: 2 December 2027 for high-risk systems, 2 August 2028 for general-purpose AI.</p><p><strong>28 March 2026</strong> | USA | NPR analysis shows states pushing back on the Trump administration&#8217;s preemption push. The federal-state AI law battle is accelerating. If you operate across state lines, assume state laws are binding for now.</p><p><strong>31 March 2026</strong> | UK | ICO publishes report on automated decision-making in recruitment. The UK&#8217;s hands-off period on hiring AI is over. If you use or sell hiring tools in the UK, start auditing for meaningful human involvement and bias testing.</p><p><strong>31 March 2026</strong> | USA | California Governor signs Executive Order N-5-26 on AI government procurement standards. If you sell to California state agencies, start tracking what procurement standards are being written. Final requirements will be published by Q3 2026.</p><p><strong>28 April 2026</strong> | EU | Next Digital Omnibus trilogue session. Watch for movement on copyright protections, user liability, and AI-generated content marking timelines. Final agreement expected by May 2026.</p><p><strong>26 May 2026</strong> | UK | Consultation deadline for &#8220;Growing up in the online world,&#8221; including AI chatbot and generative AI obligations. Submit feedback if your product affects children.</p><p><strong>29 May 2026</strong> | UK | Deadline for feedback on ICO consultation on automated recruitment decisions. If you use or sell hiring tools in the UK, respond to the consultation to shape final guidance.</p><p><strong>June 2026</strong> | USA | Colorado AI Act impact assessment requirement deadline for high-risk systems. If you operate in Colorado and use AI for hiring, lending, or consequential decisions, this is your next hard deadline.</p><p><strong>2 August 2026</strong> | EU | Each EU member state must establish at least one AI regulatory sandbox. Sandboxes provide liability protection if you follow sandbox guidelines in good faith while testing AI systems. This is your window to enter a sandbox if your system qualifies.</p><p><strong>August 2026</strong> | EU | EU AI Office enforcement powers activate. From then on, the office will be able to issue fines of up to 3 percent of global annual turnover or 15 million euros.</p><p><strong>2 December 2027</strong> | EU | Compliance deadline for high-risk AI systems (Annex III) under the AI Act. 
If you sell hiring, credit, insurance, biometrics, or consequential decision-making tools to the EU, this is your deadline.</p><div><hr></div><p>Sources:</p><ul><li><p><a href="https://www.ropesgray.com/en/insights/alerts/2026/03/the-white-house-legislative-recommendations-national-policy-framework-for-artificial-intelligence-an">White House National AI Policy Framework</a></p></li><li><p><a href="https://regulations.ai/regulations/RAI-US-NA-ENFORCE-2026">FTC Policy Statement on AI and Section 5 of the FTC Act</a></p></li><li><p><a href="https://www.dlapiper.com/en-us/insights/publications/2026/04/california-governor-issues-executive-order-on-ai-procurement-standards">California Governor Signs Executive Order N-5-26 on AI Government Procurement Standards</a></p></li><li><p><a href="https://www.npr.org/2026/03/28/nx-s1-5755062/trump-wants-a-deadlocked-congress-to-move-on-ai-frustrated-states-say-they-already-have">Trump wants a deadlocked Congress to move on AI. Frustrated states say they already have</a></p></li><li><p><a href="https://www.nicfab.eu/en/posts/digital-omnibus-ai-trilogue/">Digital Omnibus on AI: Trilogue Negotiations Begin</a></p></li><li><p><a href="https://artificialintelligenceact.eu/ai-regulatory-sandbox-approaches-eu-member-state-overview/">EU AI Regulatory Sandbox Approaches: Member State Overview</a></p></li><li><p><a href="https://privacymatters.dlapiper.com/2026/04/uk-ico-report-on-automated-decision-making-in-recruitment/">ICO Report on Automated Decision-Making in Recruitment</a></p></li><li><p><a href="https://www.farrer.co.uk/news-and-insights/using-ai-in-recruitment-new-guidance-from-the-ico-on-key-compliance-obligations/">UK ICO Publishes Guidance on AI and Automated Recruitment Decisions</a></p></li><li><p><a href="https://epthinktank.eu/2026/04/01/ai-regulatory-sandboxes-state-of-play-and-implementation-challenges/">AI Regulatory Sandboxes: State of Play and Implementation Challenges</a></p></li><li><p><a 
href="https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2026)785685">Parliamentary Think Tank: Parliament&#8217;s emerging position on the Digital Omnibus</a></p></li><li><p><a href="https://www.iedconline.org/news/2026/03/10/federal-policy-updates/u.s.-house-passes-small-business-ai-advancement-act/">Small Business AI Advancement Act Passes House</a></p></li><li><p><a href="https://www.onetrust.com/blog/where-ai-regulation-is-heading-in-2026-a-global-outlook/">Where AI Regulation is Heading in 2026: A Global Outlook</a></p></li></ul><p></p><div><hr></div><h6><em><strong>DISCLAIMER</strong></em></h6><h6><em>The information is intended to be helpful but is in no way a substitute for seeking professional advice for your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full <a href="https://aigovernanceplaybook.substack.com/p/disclaimer">DISCLAIMER</a></em></h6><h6><em><strong>Copyright Notice</strong></em></h6><h6><em>The information in this document is protected by international copyright laws. You may use the information for personal purposes but you are not permitted to duplicate it or distribute it to anyone else. 
This copy is for you only.</em></h6>]]></content:encoded></item><item><title><![CDATA[Act Now Brief | Monday 30 March 2026 ]]></title><description><![CDATA[For founders, operators, and creators who are using AI | USA &#183; UK &#183; EU]]></description><link>https://www.aigovernanceplaybook.com/p/ai-act-now-brief-monday-30-march-2026</link><guid isPermaLink="false">https://www.aigovernanceplaybook.com/p/ai-act-now-brief-monday-30-march-2026</guid><pubDate>Mon, 30 Mar 2026 14:10:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/92d5e9bf-b629-4f83-b8ff-52de2ae5ccf6_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>&#127482;&#127480; USA | FTC just proved it will enforce its AI policy statement with real penalties</h2><p>On March 30, the FTC announced a settlement with OkCupid and Match Group Americas over undisclosed data sharing for AI training. The company gave an AI firm access to 3 million user photos and location data without consent. OkCupid&#8217;s privacy policy did not mention this sharing. The FTC treated this as a deceptive practice under Section 5 of the FTC Act, the same statute the Commission flagged on March 11 as its enforcement tool for AI.</p><p>This isn&#8217;t a theoretical threat. This is the FTC doing what it said it would do. The settlement bars OkCupid from misrepresenting data practices for 20 years and requires compliance reporting for 10 years. No admission of liability, but the path to future penalties is now open. This matters because OkCupid was doing what many companies do: sharing user data with third parties to train models. 
If your business model involves data sharing with AI partners, and your users don&#8217;t explicitly consent, you&#8217;re exposed right now.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> If you acquire training data through partnerships that involve user data, your privacy disclosures must explicitly list the partners and the use. Silence is now an actionable deceptive practice.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you share employee, customer, or user data with AI vendors or third parties, audit your contracts and privacy disclosures this week. Undisclosed sharing is now an FTC enforcement priority.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the FTC just showed its hand. Data sharing for AI training is under Section 5 scrutiny now. Get explicit disclosure and consent mechanisms in place before you touch user data.</p><p><strong>Who feels this most:</strong></p><ul><li><p><strong>Data and AI platforms:</strong> If you use user data for model training or fine-tuning, you need explicit consent for each use case and each third party you share with. The absence of disclosure is now an enforcement liability.</p></li><li><p><strong>Dating, social, and consumer apps:</strong> You&#8217;re collecting rich personal data. If any of it flows to third parties without explicit disclosure, you&#8217;re the next case study.</p></li></ul><div><hr></div><h3>&#127466;&#127482; EU | Still no movement on Digital Omnibus. Build for August 2.</h3><p>The Digital Omnibus vote happened on March 26 as scheduled. It didn&#8217;t pass. This means the August 2, 2026 deadline for EU AI Act high-risk compliance remains in place. No delay. No extension. Same deadline as last week. Same urgency as last week.</p><p>Only 8 of 27 EU member states have designated their AI Act enforcement authorities. This creates enforcement chaos at first, but enforcement will happen. 
If you&#8217;re shipping high-risk AI into the EU (hiring, lending, healthcare, criminal justice), you need conformity assessment, technical documentation, and CE marking by August 2. That&#8217;s four months away. If you haven&#8217;t started this work, you&#8217;re months behind.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> If your system qualifies as high-risk under Annex III, your compliance timeline is real. August 2 isn&#8217;t negotiable. Get your conformity assessment scope defined this week.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you deploy high-risk AI in the EU, you&#8217;re liable for impact assessments, risk documentation, and consumer notice. You can&#8217;t delegate compliance to the vendor. Do your audit this week.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the Digital Omnibus didn&#8217;t pass. August 2 is the deadline. Build a timeline working backwards from that date. You need compliance scope, vendor capacity, and testing done by mid-June.</p><div><hr></div><h3>&#127468;&#127463; UK | FCA has named AI financial chatbots as an active enforcement risk</h3><p>On 26 March, the Financial Conduct Authority (FCA) published its latest perimeter report, explicitly flagging AI-powered personal finance tools and chatbots that offer what amounts to regulated financial advice without authorisation. The FCA&#8217;s perimeter reports are how the agency signals where formal enforcement attention is heading. This is not a consultation, and it isn&#8217;t speculative.</p><p>The risk is specific. An AI tool positioned as &#8220;guidance&#8221; that ends up recommending a specific product, summarising a pension&#8217;s exit fees, or suggesting a fund, crosses into regulated advice under the Financial Services and Markets Act 2000 (FSMA). The firm that deployed it carries the liability. 
Consumer Duty obligations add a second layer: if your AI produces a hallucinated rate of return and a customer acts on it, you&#8217;re exposed regardless of how your terms of service describe the product. The FCA has said that unsupervised generative AI should not be used for substantive financial communications.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> If your product touches personal finance in the UK, check this week whether any output it produces could be read as a specific financial recommendation. The FCA has now explicitly named this category as enforcement territory.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you&#8217;re using an AI tool that produces anything a customer might construe as financial guidance, document how you&#8217;re supervising its outputs. &#8220;We didn&#8217;t know&#8221; is not a Consumer Duty defence.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell fintech clients that the March 26 perimeter report is their signal to get their FSMA and Consumer Duty mapping done before the FCA launches a thematic review into this space.</p><p><strong>Who feels this most:</strong></p><ul><li><p><strong>Fintech and wealthtech:</strong> You&#8217;re the named category. The FCA has told you it&#8217;s looking here. The burden of demonstrating you&#8217;re on the right side of the regulated/unregulated line is yours.</p></li><li><p><strong>HR and benefits tools:</strong> If your AI helps employees understand pension or salary sacrifice options, you may be closer to the advice boundary than you think.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | ICO still in engagement phase. Documentation remains your shield.</h3><p>The Information Commissioner&#8217;s Office (ICO) remains in the monitoring and engagement phase. No new enforcement actions this week. 
It is still building its picture of data protection practices at foundation model developers and deployers. When the ICO launches enforcement cases, your documentation will be the evidence that proves or disproves your compliance.</p><p>Have bias testing results. Have data processing records. Have impact assessments. Have risk documentation. Have transparency disclosures. If the ICO asks and you have to say these documents don&#8217;t exist, you&#8217;ve compounded your compliance problem with an enforcement problem.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> If you&#8217;re a UK AI company or building for UK use, document your testing, bias assessments, and risk mitigation now. These are the artifacts that shield you in enforcement.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring, benefits, or automated decisions, run an audit of your documentation this week. The ICO will ask. Be ready to show it.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the ICO engagement with foundation models is building toward enforcement. The question isn&#8217;t whether to document, it&#8217;s whether to do it now or after the ICO knocks. Do it now.</p><div><hr></div><h2>&#128993; Heads up</h2><h3>&#127482;&#127480; USA | Oregon and Washington just created private liability for AI companion chatbots</h3><p>Oregon SB 1546 (signed March 31) and Washington HB 2225 (signed March 24) are now law. Both regulate AI systems designed to simulate sustained human relationships through natural language conversation. This includes most consumer-facing LLMs with personalization features.</p><p>Both states require disclosure that the AI isn&#8217;t human (every hour for minors, every three hours for adults). Both prohibit manipulative tactics like faking emotional distress when a user tries to leave. 
Oregon adds a private right of action: $1,000 per violation per user, plus attorney fees for plaintiffs. Washington relies on attorney general enforcement only.</p><p>The private right of action is the enforcement mechanism you need to worry about. One class action lawsuit from users in Oregon could cost you millions. If you sell consumer-facing AI into these states, you need to be compliant before January 1, 2027 when these laws take effect. Start assessing your disclosure and safeguard mechanisms now.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> If you build conversational AI that could be characterized as sustaining a relationship, audit your disclosure practices and safeguard rules for Oregon and Washington compliance. You have nine months.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you deploy chatbots or conversational AI systems that interact with consumers, check whether you need to comply with Oregon and Washington rules. Minimum: disclosure and no manipulative tactics.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients Oregon and Washington just created private liability. This is different from Federal Trade Commission enforcement or state AG action. A class of users suing for statutory damages is a real business risk. Plan for compliance.</p><p><strong>Who feels this most:</strong></p><ul><li><p><strong>Consumer AI and conversational AI:</strong> You&#8217;re building the exact systems these laws target. Check your disclosure mechanisms and safeguard rules before January 1, 2027.</p></li><li><p><strong>Dating and social platforms:</strong> If your app has features that simulate relationships or use personalization to sustain engagement, Oregon and Washington have you in their sights.</p></li></ul><h3>&#127482;&#127480; USA | California AB 1883 advances. Workplace surveillance AI is being regulated.</h3><p>California AB 1883 passed committee 5-0 on March 19. 
The bill prohibits employers from using AI surveillance tools to infer protected characteristics about workers (race, gender, disability status, etc.) or from running predictive behavior analysis on workers. Violations carry penalties of up to $500 per employee per incident. The bill is still in the legislature, but the margin in committee signals it will likely pass.</p><p>This is separate from the broader California workplace AI bills (AB 1898 on notice/disclosure, SB 947 on automated hiring decisions). If you build HR tech, employee monitoring tools, or worker scoring systems, California is now regulating three different aspects of your product. The patchwork&#8217;s expanding, not shrinking.</p><p><strong>So what?</strong></p><p><strong>If you&#8217;re building AI products:</strong> If you build HR tech, hiring tools, or employee monitoring systems, assume California will regulate you. Plan for both notice/disclosure requirements and restrictions on inference and predictive behavior analysis.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring, performance management, or employee monitoring in California, your vendors may soon face legal restrictions. Start asking vendors what they&#8217;re doing to comply with AB 1883.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients California is regulating AI surveillance and bias in three separate bills. Build for the strictest rule. The private right of action in these bills means class action risk, not just regulator enforcement.</p><div><hr></div><h2>&#128994; On the radar</h2><ul><li><p><strong>Colorado AI Act still set for June 30 enforcement:</strong> No change since last week. Unless the legislature amends it further, high-risk AI systems affecting employment, housing, credit, and healthcare will require compliance by June 30. 
That&#8217;s three months away.</p></li><li><p><strong>White House federal preemption litigation task force moving forward:</strong> The Department of Justice (DOJ) is actively challenging state AI laws on constitutional grounds. This will take time to resolve. Build for state compliance now and don&#8217;t assume federal preemption arguments will succeed in court.</p></li><li><p><strong>Arizona HB 2311 chatbot bill moving:</strong> The Arizona House is set for a hearing on chatbot labeling and disclosure. Small scope, but part of a growing trend of states regulating how you describe your AI&#8217;s capabilities.</p></li><li><p><strong>UK ADM Code of Practice still in development:</strong> The ICO will develop statutory guidance on automated decision-making (hiring, benefits, lending, etc.), informed by its ongoing engagement with foundation model developers. No deadline yet, but expect it later this year.</p></li></ul><div><hr></div><h2>The one thing to do this week</h2><p>Two things are tied for urgency this week, so pick the one that fits your business. If you&#8217;re in UK fintech or any product that touches personal finance, read the FCA&#8217;s March 26 perimeter report and map your outputs against the regulated/unregulated advice line. If you&#8217;re not in fintech, check your user-facing privacy policy against the FTC&#8217;s OkCupid settlement: does it name every third party that receives user data, every use case, and every AI partner you share with? If it doesn&#8217;t, fix it before the FTC comes to you.</p><div><hr></div><h2>Deadline tracker</h2><p><strong>&#127466;&#127482; EU</strong> | High-risk AI systems must be compliant with EU AI Act rules (conformity assessment, CE marking, registration) | August 2, 2026 | Four months away. Digital Omnibus didn&#8217;t pass. Assume the deadline holds.</p><p><strong>&#127482;&#127480; USA</strong> | Colorado AI Act enforcement begins | June 30, 2026 | Three months away. 
High-risk systems in employment, housing, credit, health, and education.</p><p><strong>&#127482;&#127480; USA</strong> | Oregon SB 1546 and Washington HB 2225 take effect (AI companion chatbots) | January 1, 2027 | Nine months away. Private right of action in Oregon ($1,000 per violation).</p><p><strong>&#127482;&#127480; USA</strong> | California AB 1883 vote in legislature | TBD 2026 | Passed committee 5-0 March 19. High likelihood of passage. Workplace AI surveillance restrictions and bias inference prohibitions.</p><p><strong>&#127482;&#127480; USA</strong> | California AB 853 (delayed AI transparency rules) now due | August 2, 2026 | Requires training data summary for AI systems trained on 1M+ data points.</p><p><strong>&#127468;&#127463; UK</strong> | ICO finishes engagement with 11 foundation model developers | TBD 2026 | Findings will shape enforcement priorities. Expect enforcement cases to follow.</p><p><strong>&#127468;&#127463; UK</strong> | UK government to develop statutory ADM Code of Practice | TBD 2026 | Secondary legislation required. Will provide guidance on automated decision-making. 
Timing uncertain.</p><div><hr></div><p>Sources:</p><ul><li><p><a href="https://www.ftc.gov/news-events/news/press-releases/2026/03/ftc-takes-action-against-match-okcupid-deceiving-users-sharing-personal-data-third-party">FTC Takes Action Against Match and OkCupid for Deceiving Users on Data Sharing</a></p></li><li><p><a href="https://regulations.ai/regulations/RAI-US-NA-ENFORCE-2026">FTC Policy Statement on AI and Section 5 of the FTC Act</a></p></li><li><p><a href="https://www.whitehouse.gov/releases/2026/03/president-donald-j-trump-unveils-national-ai-legislative-framework/">White House National Policy Framework for Artificial Intelligence</a></p></li><li><p><a href="https://artificialintelligenceact.eu/implementation-timeline/">EU AI Act Implementation Timeline</a></p></li><li><p><a href="https://ico.org.uk/about-the-ico/our-information/our-strategies-and-plans/artificial-intelligence-and-biometrics-strategy/ai-and-biometrics-strategy-update-march-2026/">ICO AI and Biometrics Strategy Update March 2026</a></p></li><li><p><a href="https://www.resultsense.com/news/2026-04-09-uk-fca-pra-ai-approach-2026">UK FCA, PRA hold principles-based AI line in 2026</a></p></li><li><p><a href="https://www.insideglobaltech.com/2026/04/09/uk-financial-services-regulators-approach-to-artificial-intelligence-in-2026/">UK financial services regulators AI approach 2026</a></p></li><li><p><a href="https://www.transparencycoalition.ai/news/guide-to-oregon-ai-chatbot-safety-bill-sb1546">Oregon SB 1546: AI Chatbot Safety Bill Guide</a></p></li><li><p><a href="https://www.lexology.com/library/detail.aspx?g=14e46be7-1703-4159-b57c-42f0f7d579bd">Washington HB 2225 and Oregon SB 1546 AI Companion Laws</a></p></li><li><p><a href="https://www.troutmanprivacy.com/2026/03/proposed-state-ai-law-update-march-23-2026/">California AB 1883 Workplace Surveillance AI Bill</a></p></li><li><p><a 
href="https://www.hunton.com/privacy-and-information-security-law/enforcement-of-colorado-ai-act-delayed-until-june-2026">Colorado AI Act Enforcement Delayed to June 2026</a></p></li></ul><p></p><div><hr></div><h6><em><strong>DISCLAIMER</strong></em></h6><h6><em>The information is intended to be helpful but is in no way a substitute for seeking professional advice for your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full <a href="https://aigovernanceplaybook.substack.com/p/disclaimer">DISCLAIMER</a></em></h6><h6><em><strong>Copyright Notice</strong></em></h6><h6><em>The information in this document is protected by international copyright laws. You may use the information for personal purposes but you are not permitted to duplicate it or distribute it to anyone else. This copy is for you only.</em></h6>]]></content:encoded></item><item><title><![CDATA[What's Building | Thursday 26 March 2026]]></title><description><![CDATA[A forward-looking briefing on AI regulation across the USA, UK, and EU. 
What's moving through legislative pipelines right now, and what it means for your business in the next three to six months.]]></description><link>https://www.aigovernanceplaybook.com/p/ai-whats-building-thursday-26-march</link><guid isPermaLink="false">https://www.aigovernanceplaybook.com/p/ai-whats-building-thursday-26-march</guid><pubDate>Thu, 26 Mar 2026 15:34:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/12d3ac62-5b1f-46c3-8971-2f12f5d52ff4_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>On the watchlist</h2><p><em>Things moving through regulatory pipelines that will matter in the next 3 to 6 months</em></p><h3>&#127482;&#127480; USA | Federal preemption and the state law showdown</h3><p><strong>Status: &#128308; Watch - The White House just picked a fight it may lose, and your state laws are now in legal limbo</strong></p><h3>What is happening</h3><p>On 20 March 2026, the White House released a National Policy Framework for Artificial Intelligence proposing broad federal preemption of state AI laws. The framework directs the Attorney General to establish an AI Litigation Task Force (already active since January) to challenge state laws in federal court, particularly those requiring &#8220;alterations to truthful outputs&#8221; of AI models or imposing liability on developers for third-party misuse. </p><p>The Commerce Department published a report by the 11 March deadline identifying state laws it considers overly burdensome, with California, Colorado, and Texas specifically flagged. This isn&#8217;t legislative yet. Congress rejected comparable preemption proposals in the One Big Beautiful Bill Act and the National Defense Authorization Act. The legal strategy relies on the Dormant Commerce Clause theory, arguing that state laws place undue burdens on interstate commerce.</p><h3>Why founders should care</h3><p>This is the biggest structural question facing AI founders right now. 
If the Administration wins these cases, your patchwork compliance burden collapses into a single federal floor. If it loses, the current state-by-state regime stays, and you&#8217;re managing 50 different sets of rules. The litigation will take time (years, likely), but it affects your roadmap decisions now. If you&#8217;re operating across state lines, you&#8217;ve already been dealing with Colorado&#8217;s impact assessment requirements (now delayed to June 2026), New York City&#8217;s bias audit mandate for hiring tools, and California&#8217;s automated decision system rules. </p><p>These lawsuits won&#8217;t immediately kill state laws, but they create uncertainty that may discourage future state action. For now, assume state laws are binding and build your compliance infrastructure accordingly. The federal government&#8217;s push for preemption is a real attempt to reduce your burden, but it&#8217;s not a done deal.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you&#8217;re selling hiring, lending, or decision-making tools across state lines, your compliance roadmap depends on which way these lawsuits go. For now, build for the hardest standard (Colorado, California, NYC) and treat federal-only compliance as a bonus if it wins. Mark July 2026 as the decision point for whether you need a six-month delay in your roadmap.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring, lending, or employment decisions, the state where your employees and applicants live matters more than federal law right now. NYC requires bias audits and advance notice for hiring tools. California and Colorado require impact assessments for high-risk systems. Colorado&#8217;s deadline moved to June 2026. 
Audit your tools by May 2026 to know whether you&#8217;re compliant.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients that state law isn&#8217;t optional and federal preemption isn&#8217;t certain. Help them map which states they operate in and what that means for the timeline and cost. If a client is in Colorado, NYC, or California and hasn&#8217;t done impact assessments or bias audits, they&#8217;re now late.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> New York City&#8217;s bias audit requirement is live now for automated employment decision tools. You must publish audit summaries online, notify candidates 10 business days before use, and offer an alternative process if they request one. Illinois now allows candidates to sue directly if they believe they were discriminated against via AI hiring tools, without filing a complaint first. Colorado&#8217;s impact assessment deadline is June 2026 (moved from January). If you sell hiring tools, assume all three states are your compliance floor.</p></li><li><p><strong>Financial Services:</strong> Credit scoring, loan approval, and fraud detection are all high-risk under state laws and will be under federal law when the preemption question settles. Multi-state banks should assume state AI laws remain binding. The Federal Reserve has updated model risk management guidance for AI systems. If you approve loans, assume you need impact assessments in Colorado, California, and any state where you originate credit.</p></li><li><p><strong>Retail &amp; AdTech:</strong> Algorithmic decision-making in advertising and pricing is not yet explicitly regulated at the state level, but the FTC&#8217;s Section 5 policy statement (see below) makes deceptive AI-generated content and algorithmic discrimination enforceable concerns. 
If you use AI to target ads or set prices, the FTC is watching.</p></li></ul><div><hr></div><h3>&#127482;&#127480; USA | FTC&#8217;s Section 5 enforcement playbook is live</h3><p><strong>Status: &#128993; Watch - The FTC just gave itself a blank check to enforce AI practices you may not know are illegal</strong></p><h3>What is happening</h3><p>The Federal Trade Commission published its Policy Statement on AI and Section 5 of the FTC Act on 11 March 2026. The statement doesn&#8217;t introduce new rules but interprets the century-old Section 5 prohibition on unfair or deceptive practices to apply directly to AI systems across their entire lifecycle. The FTC signals enforcement priorities: algorithmic discrimination, deceptive AI-generated content, privacy violations tied to AI data collection, non-transparent automated decision-making, and false or misleading claims about AI safety. </p><p>The statement also targets state laws that require AI models to alter &#8220;truthful outputs,&#8221; arguing they may themselves be deceptive under federal law. This creates a novel preemption argument: states can&#8217;t force you to censor truthful AI outputs because doing so would be deceptive under Section 5.</p><h3>Why founders should care</h3><p>This is a big shift in enforcement tone. The FTC isn&#8217;t waiting for Congress or new legislation. It&#8217;s repurposing existing authority to enforce AI practices, and it&#8217;s being explicit about what it&#8217;s watching: generated content (deepfakes, manipulated images), discrimination, transparency, and safety claims. </p><p>If you generate content, use AI to make decisions about people, train models on data you collected without clear consent, or claim your AI is safe or unbiased, you&#8217;re in the FTC&#8217;s enforcement perimeter. The Section 5 standard is broad, which gives the FTC room to act but also means the boundaries aren&#8217;t crystal clear. 
Expect enforcement action in the next 12 months, particularly against companies making safety or fairness claims they can&#8217;t substantiate.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> Audit your content generation, disclosure practices, and data sourcing. The FTC is watching for: generated content without labels, claims about AI safety or unbiasedness you can&#8217;t prove, and data collection practices that lacked clear user consent. If you generate images, text, or audio, label it. If you make safety claims, document them. Document data sourcing and user consent for training.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI to screen candidates, approve credit, detect fraud, or make decisions affecting people, document that you&#8217;ve tested the system for discrimination. The FTC is enforcing anti-discrimination law through the AI lens. If you don&#8217;t have evidence that your system doesn&#8217;t discriminate, fix it before the FTC comes calling.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients to assume the FTC will challenge any claim about AI that they can&#8217;t prove, and to treat their AI systems the way they would treat a consumer product: disclosed features, tested safety claims, and transparent decision-making. Section 5 is now an active enforcement territory.</p><h3>Who feels this most</h3><ul><li><p><strong>Media &amp; Publishing:</strong> If you generate images, deepfakes, or synthetic media, the FTC is now explicitly watching for deceptive labeling and unlabeled AI content. The policy statement flags AI-generated advertising as a key concern. Label all generated content clearly. Don&#8217;t make unlabeled deepfake ads.</p></li><li><p><strong>Financial Services:</strong> The FTC and Consumer Financial Protection Bureau are coordinating on algorithmic discrimination in lending. 
The FTC&#8217;s Section 5 statement makes clear it will enforce anti-discrimination law through the AI lens. If you use AI to score credit, price loans, or detect fraud, document non-discrimination testing.</p></li><li><p><strong>HR / Hiring:</strong> Same enforcement story as financial services. If you use AI to screen resumes, conduct interviews, or rank candidates, assume the FTC is watching for discrimination. Documentation of fairness testing is now your first line of defense.</p></li></ul><div><hr></div><h3>&#127466;&#127482; EU | Digital Omnibus trilogue just started, and the deadlines are now firm</h3><p><strong>Status: &#128993; Watch - EU AI Act timelines just locked in for another 18 months, with compliance deadlines now fixed through 2028</strong></p><h3>What is happening</h3><p>On 13 March 2026, the EU Council agreed its negotiating position on the Digital Omnibus (the package of AI Act amendments). On 26 March, the European Parliament confirmed its position. Trilogue negotiations (the three-way deal-making between Council, Parliament, and Commission) have now begun, with the next negotiation session scheduled for 28 April 2026. </p><p>The key outcome: fixed deadlines for high-risk AI systems are confirmed. Annex III systems (like biometrics and hiring tools) must comply by 2 December 2027. Annex I systems (foundational models and general-purpose AI) must comply by 2 August 2028. Parliament has also introduced a targeted ban on AI systems that generate sexual or intimate content without consent. The most visible disagreement is over how long companies get to mark AI-generated content (Parliament wants three months, Council wants longer).</p><h3>Why founders should care</h3><p>These deadlines are now locked in across EU institutions, which means they&#8217;ll likely survive trilogue negotiations. 
If you&#8217;re selling to Europe, particularly in hiring, credit, or biometrics, your EU deadline is 2 December 2027, assuming your system is classified as high-risk. That&#8217;s roughly 20 months away. The AI Act&#8217;s definition of high-risk includes most automated decision-making that affects people&#8217;s rights or opportunities. </p><p>The deadline pressure is real. For foundational model providers (Claude, GPT, etc.), compliance is due by August 2028. For small teams using AI to build customer-facing products, the December 2027 deadline is the wall you&#8217;re approaching. The EU enforcement regime comes into force in August 2026, meaning the EU AI Office will have the power to issue fines starting then (up to 3 percent of global annual turnover or 15 million euros, whichever is higher).</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you sell to the EU and your product makes decisions about hiring, credit, insurance, benefits, or other consequential outcomes, you have 20 months to comply with the AI Act. That means now&#8217;s the time to: classify your system (is it actually high-risk under the EU definition?), document your training data, design your transparency features, and plan your conformity assessment. If you&#8217;re still unsure whether your system counts as high-risk, you&#8217;re behind. Start the classification question immediately.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI in the EU for hiring, lending, or decisions affecting people, assume you&#8217;re subject to the AI Act as a user, not just a provider. You may need to ensure the systems you buy from are compliant. The liability question (who&#8217;s responsible if the system fails?) is still being negotiated in trilogue, but you&#8217;re likely on the hook in some form. 
Track the trilogue outcome around user liability (decisions expected by late April).</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with EU exposure that the 2 December 2027 deadline for high-risk systems is now agreed across EU institutions and is the working assumption for trilogue negotiations. Clients should assume this deadline will hold and plan for it. If they sell HR, credit, or insurance products to the EU, they have 20 months and should start conformity assessment planning now.</p><h3>Who feels this most</h3><ul><li><p><strong>HR / Hiring:</strong> Your EU deadline is 2 December 2027. Hiring tools are explicitly high-risk under the AI Act. Start documentation, fairness testing, and impact assessment immediately. The EU AI Office&#8217;s enforcement powers start in August 2026, so by then you should have a credible compliance plan for hitting the December 2027 deadline.</p></li><li><p><strong>Financial Services:</strong> Credit scoring, loan approval, and fraud detection are high-risk. Same 2 December 2027 deadline. This is a hard deadline for any bank or fintech selling to the EU.</p></li><li><p><strong>Healthcare:</strong> Healthcare AI is not uniformly high-risk, but diagnostic and treatment recommendation systems are. If you sell to EU hospitals or clinics, classify your system now. If it is high-risk, you have 20 months to comply.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | Copyright and AI policy remains frozen, but the door is cracking open</h3><p><strong>Status: &#128993; Watch - The UK government backed away from its own opt-out proposal, which means copyright and AI in the UK is now a live policy question with no clear answer</strong></p><h3>What is happening</h3><p>On 19 March 2026, the UK Department for Science, Innovation and Technology (DSIT) published its response to a December 2024 consultation on copyright and artificial intelligence. 
The government had previously favored an opt-out system: allow AI companies to train on copyright works, but let rights holders opt out. This proposal was overwhelmingly rejected by the creative industries (film, music, publishing, and visual arts). </p><p>The government&#8217;s March response confirms it&#8217;s abandoning the opt-out option and has no new preferred policy to replace it. Instead, the UK is adopting a &#8220;wait and see&#8221; approach, monitoring ongoing litigation (Getty Images v Stability AI in the US, similar cases in the UK), international developments, and market dynamics before committing to any reform. The government states it intends to legislate on AI and copyright eventually, but has no timeline.</p><h3>Why founders should care</h3><p>The UK copyright question is now genuinely open-ended. The government has rejected its own best idea and is waiting for court cases and market forces to resolve the issue. If you&#8217;re training large language models or image generators on copyright material, you&#8217;re technically violating UK copyright law (there&#8217;s no AI-specific exception for training). </p><p>The government isn&#8217;t enforcing this aggressively right now, but that could change if courts in the US (Getty case) or the EU (where similar litigation is ongoing) rule against AI companies. For now, you have a grace period, but it&#8217;s not a long one. By late 2026 or 2027, the government is likely to revisit this question, possibly with a different answer based on what courts decide. 
If you&#8217;re UK-based and training on copyright material, assume you&#8217;re taking a risk that may be resolved by policy, by the courts, or by market consolidation.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If you train generative models on copyright material and serve UK customers, you&#8217;re technically not covered by a copyright exception for AI training (unlike some EU countries, the UK doesn&#8217;t have an AI-specific text and data mining (TDM) exception). You&#8217;re in a legal gray zone. The government isn&#8217;t enforcing it now, but courts may clarify the issue by late 2026. If you&#8217;re based in the UK, monitor the Getty case outcome and any UK copyright litigation closely. If you&#8217;re based elsewhere and selling to the UK, you have more breathing room, but don&#8217;t assume it&#8217;s permanent.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use generative AI trained on copyright material (almost all large models do this), understand that the UK government&#8217;s position is &#8220;we don&#8217;t know yet.&#8221; This is lower risk than the EU AI Act, but it&#8217;s not zero risk. No action needed now, but track the Getty case and any UK copyright decisions closely.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients with UK copyright exposure that the government has paused reform and is waiting for courts and international developments to clarify the issue. This buys time, but it isn&#8217;t a permanent solution. Copyright enforcement risk in the UK is lower than in the EU (which is actively negotiating copyright protections in the Digital Omnibus) but higher than in the US (which has broad fair use).</p><h3>Who feels this most</h3><ul><li><p><strong>Media &amp; Publishing:</strong> If you&#8217;re training AI models on published works (books, articles, images), the UK government&#8217;s wait-and-see approach means you&#8217;re in a policy limbo that could swing either way. 
The EU is negotiating stronger copyright protections for creators; the UK is waiting. If you&#8217;re a rights holder, the UK is currently less protective than the EU. If you&#8217;re a model builder, the UK is currently less restrictive than the EU, but that could flip.</p></li></ul><div><hr></div><h3>&#127468;&#127463; UK | Children&#8217;s online safety regulations now include AI obligations</h3><p><strong>Status: &#128993; Watch - A new consultation landed on 2 March with tough new rules for AI chatbots and generative AI services, open for comment until 26 May</strong></p><h3>What is happening</h3><p>On 2 March 2026, the UK Department for Science, Innovation and Technology (DSIT) launched &#8220;Growing up in the online world: a national conversation,&#8221; a consultation on online child safety. The consultation includes explicit obligations for AI chatbots and generative AI services. The proposed measures include: stronger age assurance mechanisms, a potential statutory minimum age for social media, raising the UK&#8217;s age of digital consent (currently 13), restrictions on features like livestreaming and disappearing messages, and new obligations for AI chatbots and generative AI to protect children. </p><p>On 12 March, the UK&#8217;s Information Commissioner&#8217;s Office (ICO) and the communications regulator Ofcom issued coordinated letters to social media and video platforms demanding compliance action beyond self-declaration. The consultation closes on 26 May 2026.</p><h3>Why founders should care</h3><p>If you&#8217;re building a chat product, a generative AI tool accessible to children, or any service that might attract users under 16, this consultation is directly about you. The UK is moving toward age-gating and content safety obligations for AI. This isn&#8217;t final regulation yet (consultation closes 26 May), but the direction is clear: AI companies serving the UK will need to build age verification and child safety features. 
The consultation is broad, which means the final rules could be narrower or broader depending on feedback. For now, treat this as a signal: UK child safety regulation for AI is coming, probably within 12 to 18 months after consultation closes.</p><h3>So what?</h3><p><strong>If you&#8217;re building AI products:</strong> If your AI product is accessible to anyone under 16 in the UK, you need to think now about age verification (how do you know who&#8217;s using it?) and age-appropriate safeguards (what does your product do if a child accesses it?). The consultation is open until 26 May. If you have UK customers or users, consider responding to the consultation to shape the final rules. Implementation timeline is likely 2027 at the earliest.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI chatbots or generative AI services in the UK, track this consultation. If the final rules require age verification or content filtering, you&#8217;ll need to ensure your AI provider complies. For most small teams, this is a low priority right now, but watch for the final guidance in Q3 2026.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell UK-based clients that child safety regulation for AI is coming, and that consultation feedback (until 26 May) is your window to shape it. If your product serves children or could serve children, respond to the consultation. Final rules likely land in late 2026 or early 2027.</p><h3>Who feels this most</h3><ul><li><p><strong>EdTech:</strong> If you build educational AI products used by students under 16 in the UK, this consultation is directly about you. Age verification and content safety will be mandatory. 
Start thinking now about how to implement age checks and how to protect minors from inappropriate outputs.</p></li></ul><div><hr></div><h2>Probably noise</h2><p><em>Developments unlikely to affect founders in the next 12 months, and why</em></p><ul><li><p><strong>&#127466;&#127482; EU | GPAI Code of Practice finalized:</strong> The EU AI Office published the final GPAI (general-purpose AI) Code of Practice on 5 March 2026. This is soft law, not binding, and sets out voluntary best practices for AI model providers on transparency, copyright, and safety testing. It&#8217;s not an enforcement mechanism for small teams right now. If you&#8217;re a model provider (building Claude-like systems), pay attention. If you&#8217;re using AI in your business or building customer products, this is background noise for now.</p></li><li><p><strong>&#127468;&#127463; UK | AI Growth Lab sandboxes delayed:</strong> The UK DSIT opened consultation on its AI Growth Lab (regulatory sandboxes for testing AI under modified rules) in October 2025, with responses due 2 January 2026. No new updates in March. This remains a policy proposal with no confirmed timeline for implementation. It is interesting but not actionable yet. Revisit if the government announces a launch date.</p></li></ul><div><hr></div><h2>The pattern this week</h2><p>Three big moves converged this week, and they point in the same direction: regulation is shifting from legislative uncertainty to enforcement action. The US White House released its preemption framework, and the FTC published its Section 5 playbook within days of each other (20 and 11 March, respectively). The EU locked in AI Act deadlines across institutions at the same moment trilogue negotiations began. The UK restarted conversations on copyright and child safety. None of these are final rules, but all of them are commitments. The US preemption fight is the wildcard. If the Administration wins in court, founders get a simpler compliance landscape. 
If it loses, the state-by-state mess continues. Either way, the next 12 months are when that gets decided, and your roadmap should account for both possibilities. </p><p>The EU deadlines are firm now (2 December 2027 for high-risk systems, 2 August 2028 for general-purpose AI). If you sell to Europe, those aren&#8217;t negotiable dates. The FTC&#8217;s Section 5 policy gives the agency explicit enforcement authority over AI practices. Expect enforcement actions by mid-2026. The UK is moving more slowly but in the same direction: child safety, copyright clarity, and AI-specific rules are all coming, likely in 2027. </p><p>The common thread is that regulators are no longer waiting for Congress or perfect legislative solutions. They&#8217;re using existing authority (FTC Section 5, EU AI Act, UK ICO powers, UK Ofcom powers) to enforce AI practices now, and using the next 12 to 24 months to build the case for more binding rules. For founders, this means: document your practices, test for discrimination, label generated content, and assume that ambiguity will be resolved against you, not for you. Regulation is moving from &#8220;should we?&#8221; to &#8220;how are you complying?&#8221;</p><div><hr></div><h2>Sector getting the most heat this week</h2><p><strong>Sector: HR / Hiring</strong></p><p>You&#8217;re in the regulatory crosshairs across all three jurisdictions right now. In the US, New York City&#8217;s bias audit requirements are live, Illinois allows candidates to sue directly for discrimination via AI, and Colorado&#8217;s impact assessment deadline is June 2026. The FTC&#8217;s Section 5 policy statement explicitly flags automated employment decisions as a focus area. In the EU, hiring tools are classified as high-risk under the AI Act, and your compliance deadline is 2 December 2027. </p><p>In the UK, the children&#8217;s safety consultation includes child-safe content from AI systems, which affects educational hiring tools. 
If you build, sell, or use AI for hiring, your 2026 is now allocated to compliance work. </p><p>New York City requires bias audits and advance notice. Colorado requires impact assessments. The EU requires high-risk classification, training data documentation, and conformity assessment. Start with whatever deadline is closest to you (Colorado is June 2026, NYC is already live). If you sell hiring tools to multiple states or the EU, pick the hardest standard you operate in and build for that. You&#8217;ll be compliant everywhere else.</p><div><hr></div><h2>Dates to put in your calendar</h2><p><strong>11 March 2026</strong> | USA | FTC Policy Statement on AI and Section 5 published. The FTC is now actively enforcing AI practices under existing authority. Review your disclosure practices, data sourcing, and any safety claims you make.</p><p><strong>19 March 2026</strong> | UK | DSIT published response to copyright and AI consultation. The government is adopting a wait-and-see approach. If you train models on copyright material, expect continued uncertainty, but no new enforcement action yet.</p><p><strong>20 March 2026</strong> | USA | White House releases National Policy Framework for AI with broad preemption proposal and executive order directing DOJ to challenge state laws. The federal-state legal battle is now formally on. Watch for litigation updates.</p><p><strong>26 March 2026</strong> | EU | European Parliament confirms position on Digital Omnibus. AI Act compliance deadlines are now locked in: 2 December 2027 for high-risk systems, 2 August 2028 for general-purpose AI.</p><p><strong>28 April 2026</strong> | EU | Next Digital Omnibus trilogue session. Watch for movement on copyright protections, user liability, and AI-generated content marking timelines.</p><p><strong>26 May 2026</strong> | UK | Consultation deadline for &#8220;Growing up in the online world,&#8221; including AI chatbot and generative AI obligations. Submit feedback if your product affects children.</p><p><strong>June 2026</strong> | USA | Colorado AI Act impact assessment requirement deadline for high-risk systems (moved from January). If you operate in Colorado, this is your next hard deadline.</p><p><strong>August 2026</strong> | EU | EU AI Office enforcement powers activate. The office can now issue fines up to 3% of global annual turnover or 15 million euros.</p><p><strong>2 December 2027</strong> | EU | Compliance deadline for high-risk AI systems (Annex III) under the AI Act. If you sell hiring, credit, insurance, biometrics, or consequential decision-making tools to the EU, this is your deadline.</p><div><hr></div><p>Sources:</p><ul><li><p><a href="https://www.ropesgray.com/en/insights/alerts/2026/03/the-white-house-legislative-recommendations-national-policy-framework-for-artificial-intelligence-an">The White House Legislative Recommendations: National Policy Framework for Artificial Intelligence and Federal Preemption of State AI Laws</a></p></li><li><p><a href="https://www.crowell.com/en/insights/client-alerts/white-house-national-ai-policy-framework-calls-for-preempting-state-laws-protecting-children">White House National AI Policy Framework Calls for Preempting State Laws, Protecting Children</a></p></li><li><p><a href="https://www.paulhastings.com/insights/client-alerts/president-trump-signs-executive-order-challenging-state-ai-laws">President Trump Signs Executive Order Challenging State AI Laws</a></p></li><li><p><a href="https://regulations.ai/regulations/RAI-US-NA-ENFORCE-2026">FTC Policy Statement on AI and Section 5 of the FTC Act</a></p></li><li><p><a href="https://openclawai.io/blog/ftc-ai-policy-statement-agent-enforcement/">The FTC Just Dropped Its AI Enforcement Playbook &#8212; And AI Agents Are in the Crosshairs</a></p></li><li><p><a href="https://ourtake.bakerbotts.com/post/102mirs/march-2026-federal-deadlines-that-will-reshape-the-ai-regulatory-landscape">March 2026: Federal Deadlines 
That Will Reshape the AI Regulatory Landscape</a></p></li><li><p><a href="https://www.nicfab.eu/en/posts/digital-omnibus-ai-trilogue/">EU Digital Omnibus on AI: Council and Parliament Align Mandates as Trilogue Negotiations Begin</a></p></li><li><p><a href="https://www.globalpolicywatch.com/2026/03/meps-adopt-joint-position-on-proposed-digital-omnibus-on-ai/">MEPs Adopt Joint Position on Proposed Digital Omnibus on AI</a></p></li><li><p><a href="https://table.media/en/europe/news-en/ai-omnibus-the-trilogue-begins">EU AI Omnibus: The trilogue begins</a></p></li><li><p><a href="https://www.prokopievlaw.com/post/uk-dsit-publishes-ai-and-copyright-consultation-response-uk-march-2026">UK DSIT Publishes AI and Copyright Consultation Response</a></p></li><li><p><a href="https://www.fieldfisher.com/en/services/intellectual-property/intellectual-property-blog/uk-government-maintains-status-quo-on-ai-and-copyr">UK government maintains status quo on AI and copyright</a></p></li><li><p><a href="https://www.insideglobaltech.com/2026/03/13/uk-government-launches-consultation-on-childrens-online-experiences-including-new-obligations-for-ai/">UK Government Launches Consultation on Children&#8217;s Online Experiences, Including New Obligations for AI</a></p></li><li><p><a href="https://www.akerman.com/en/perspectives/hrdef-ai-in-hiring-emerging-legal-developments-and-compliance-guidance-for-2026.html">HRDef: AI in Hiring: Emerging Legal Developments and Compliance Guidance for 2026</a></p></li><li><p><a href="https://artificialintelligenceact.eu/implementation-timeline/">Implementation Timeline | EU Artificial Intelligence Act</a></p></li><li><p><a href="https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers">Guidelines for providers of general-purpose AI models</a></p></li></ul><p></p><div><hr></div><h6><em><strong>DISCLAIMER</strong></em></h6><h6><em>The information is intended to be helpful but is in no way a substitute for seeking professional advice for 
your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full <a href="https://aigovernanceplaybook.substack.com/p/disclaimer">DISCLAIMER</a></em></h6><h6><em><strong>Copyright Notice</strong></em></h6><h6><em>The information in this document is protected by international copyright laws. You may use the information for personal purposes but you are not permitted to duplicate it or distribute it to anyone else. This copy is for you only.</em></h6>]]></content:encoded></item><item><title><![CDATA[Act Now Brief | Monday 23 March 2026 ]]></title><description><![CDATA[For founders and operators | USA &#183; UK &#183; EU]]></description><link>https://www.aigovernanceplaybook.com/p/ai-governance-playbook-23-march-2026</link><guid isPermaLink="false">https://www.aigovernanceplaybook.com/p/ai-governance-playbook-23-march-2026</guid><dc:creator><![CDATA[Andy Wood]]></dc:creator><pubDate>Mon, 23 Mar 2026 14:55:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cfce2391-4c8c-45f0-8fcd-170c60c74872_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>&#127482;&#127480; USA | FTC just defined what it will enforce on your AI systems</h2><p>The Federal Trade Commission published its Policy Statement on AI and Section 5 of the FTC Act on March 11. This isn&#8217;t a new rule. It&#8217;s the FTC saying how it will use existing law to prosecute AI companies. The statement covers algorithmic discrimination, deceptive AI-generated content, misleading claims about what your AI can do, undisclosed use of AI in marketing, and privacy violations tied to data collection for training.</p><p>Section 5 of the FTC Act already prohibits unfair or deceptive practices. The FTC is now clarifying that this applies to AI. 
If your system discriminates, misleads, or makes false claims about its capabilities, the FTC will treat it the same way it would treat a deceptive advertisement or discriminatory hiring practice. There are no safe harbors for &#8220;just being AI.&#8221;</p><p>The policy also signals that the FTC will challenge state AI laws that require changes to your AI&#8217;s outputs, saying those requirements are themselves deceptive. This creates a collision risk between state and federal law. Build your product for compliance now, not after enforcement.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> Document what your system can and cannot do. Don&#8217;t claim capabilities it doesn&#8217;t have. Don&#8217;t hide that AI is being used. Test for discrimination. The FTC will look at these things.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring, lending, credit decisions, or customer decisions, document it and monitor it for bias. The FTC treats discrimination by AI the same as discrimination by humans.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients this is immediate. Not a future problem. The FTC policy is live now. Audit for the three key areas: false claims, hidden use, and algorithmic discrimination.</p><h4><strong>Who feels this most:</strong></h4><ul><li><p><strong>Hiring and HR:</strong> If you make or sell AI screening tools, hiring platforms, or scoring systems, the FTC is already building evidence in this space. Your documentation needs to exist before enforcement, not after.</p></li><li><p><strong>Content generation and marketing:</strong> If your AI generates images, video, testimonials, or endorsements, disclosure of AI use isn&#8217;t optional. The FTC treats synthetic media the same as human-created content.</p></li></ul><div><hr></div><h3>&#127466;&#127482; EU | The August 2 deadline is now five months away. 
Readiness is uneven.</h3><p>The EU AI Act rules for high-risk systems take effect on August 2, 2026. This isn&#8217;t negotiable. Conformity assessments, technical documentation, CE marking, and registration with the EU database must be done by that date for any high-risk system affecting an EU resident.</p><p>On March 18, the European Parliament committees adopted a joint position on the &#8220;Digital Omnibus&#8221; package, which would delay high-risk AI obligations by up to 16 months, conditional on harmonized standards. The Parliament votes on March 26. This may pass. Assume it won&#8217;t. Build for August 2.</p><p>The problem: Only 8 of 27 EU member states have designated their AI Act enforcement authorities. This suggests enforcement will be chaotic and inconsistent at first, but enforcement will happen. If you&#8217;re shipping AI into the EU, you need to be compliant by August 2. Non-compliance exposes you to fines of up to &#8364;15 million or 3 percent of global turnover, whichever is higher.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If your system qualifies as high-risk under Annex III, you need conformity assessment, technical documentation, and CE marking before August 2. If this work isn&#8217;t at least 40 percent done, you&#8217;re behind.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you deploy high-risk AI in the EU (hiring, lending, healthcare, criminal justice), you&#8217;re the deployer. You must conduct impact assessments, document risks, and give consumers notice and opt-outs. Start now.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the Digital Omnibus vote happens March 26. If it passes, tell them what the new timeline is. If it doesn&#8217;t, tell them August 2 is real. Either way, they need a compliance timeline by the end of March.</p><div><hr></div><h3>&#127468;&#127463; UK | The ICO is building enforcement cases. 
Your process documentation is the evidence.</h3><p>The Information Commissioner&#8217;s Office published its AI and Biometrics Strategy update in March 2026. Key sentence: &#8220;Throughout 2026 the ICO will actively monitor advancements and work with AI developers and deployers to ensure they are clear on what the law requires.&#8221;</p><p>Translation: The ICO is gathering evidence now. It&#8217;s engaging with 11 major AI foundation model developers to understand their data protection practices. It&#8217;s building a picture of who is compliant and who isn&#8217;t. When enforcement comes, the ICO will use what it finds in these conversations against you.</p><p>The ICO is also focused on deepfakes, biometric systems, and automated decision-making (hiring, benefits, lending). If you use AI in these spaces, the ICO is watching.</p><p>Documentation matters. The ICO will ask to see your data processing records, your testing for bias, your risk assessments, and your transparency disclosures. If these documents don&#8217;t exist when asked, that&#8217;s an enforcement problem on top of the compliance problem.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If you&#8217;re a UK AI company or building for UK use, the ICO is looking at your data practices and documentation. Have it ready. Test your systems for bias and document the results.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring, benefits decisions, or automated assessment, the ICO expects you to have documentation showing you know the risks and have mitigated them. Run that audit now.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the ICO engagement with foundation models isn&#8217;t friendly advice. It&#8217;s the building of an enforcement picture. 
Get documentation in place before the ICO knocks.</p><div><hr></div><h2>&#128993; Heads up</h2><h3>&#127482;&#127480; USA | White House AI framework signals federal preemption push</h3><p>The White House published its National Policy Framework for Artificial Intelligence on March 20. The framework proposes federal legislation to unify AI policy and explicitly preempt state laws that impose compliance costs the federal government deems excessive.</p><p>This matters because states (California, Colorado, Texas, New York, and Illinois) have AI laws that are already live. The Trump administration is signaling that it will use federal law to override them where it can. The FTC&#8217;s Section 5 policy statement is part of this strategy. The Department of Justice launched a litigation task force to challenge state AI laws on constitutional grounds.</p><p>For small AI companies, this creates uncertainty. Do you build for state laws or federal law first? Build for the strictest rule you will face. Build for the states, and you will be compliant if federal preemption wins. The reverse isn&#8217;t true.</p><p>California&#8217;s AB 1883 (workplace surveillance AI) passed out of committee on March 19. Multiple state hiring bills are advancing. The patchwork is getting worse, not better.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If you sell into multiple US states, compliance cost is going up, not down. Build for California and Colorado standards unless the litigation task force wins. Plan for litigation. It&#8217;s coming.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use AI for hiring or surveillance in the US, assume state laws will apply. Audit your tools against the strictest state standard in your market.</p><p><strong>If you&#8217;re advising AI companies:</strong> Warn clients that federal preemption is being pursued through litigation and legislation. It will take years to resolve. 
Design products for state compliance now.</p><h3>&#127466;&#127482; EU | Code of Practice on AI-generated content reaches second draft</h3><p>The European Commission published the second draft Code of Practice on marking and labelling of AI-generated content on March 5. This isn&#8217;t the law yet. It&#8217;s best practice guidance that will likely become law.</p><p>The code addresses the labelling of images, video, text, and audio generated by AI. If you build tools that generate or manipulate synthetic media, this is moving toward you. The code will likely require disclosure, labelling, or both. Assume labelling will be mandatory by the end of 2026.</p><h4><strong>So what?</strong></h4><p><strong>If you&#8217;re building AI products:</strong> If you build image, video, or audio generation tools, plan for mandatory labelling or disclosure of AI-generated content. Check the draft code. Begin thinking about how this works at scale.</p><p><strong>If you&#8217;re using AI in your business:</strong> If you use generative AI to create marketing content, product images, or internal materials, prepare for the possibility that external use will require labelling.</p><p><strong>If you&#8217;re advising AI companies:</strong> Tell clients the labelling code is coming. Early adoption of labelling builds credibility with regulators and reduces friction when rules arrive.</p><div><hr></div><h2>&#128994; On the radar</h2><ul><li><p><strong>California AB 1883 advancing:</strong> Requires disclosure and bias testing of workplace surveillance AI. Passed committee on March 19. Heading to vote. If you build HR or employee monitoring tools, your documentation is soon going to matter more.</p></li><li><p><strong>Arizona HB 2311 chatbot bill:</strong> Requires labels on chatbots and disclosure of limitations. Set for March 23 hearing. 
Small impact for now, but Arizona isn&#8217;t the last state to move in this direction.</p></li><li><p><strong>Tennessee and Vermont mental health AI bills:</strong> Both states advanced legislation prohibiting AI systems from claiming to be qualified mental health professionals. The Tennessee House passed its bill on March 16; the Vermont House followed on March 18. Narrow but growing precedent.</p></li><li><p><strong>UK Data Use and Access Act report on AI and copyright:</strong> The UK government published its Copyright and AI Impact Assessment on March 18. Conclusion: wait and see. No immediate reform of copyright law for text-and-data mining. This means uncertainty continues for training data sourcing in the UK. Don&#8217;t assume reform is coming soon.</p></li><li><p><strong>CEO and board liability rising:</strong> Insurers now require risk modelling and board certification of AI governance before coverage. If you have AI in your business and no governance structure, you have an insurance problem. This isn&#8217;t enforcement yet, but it&#8217;s a leading indicator.</p></li></ul><div><hr></div><h2>The one thing to do this week</h2><p>Read the FTC Policy Statement on AI and Section 5 of the FTC Act (published March 11, 2026). It&#8217;s short. It&#8217;s readable. It&#8217;s the enforcement policy, not the background. Highlight the sections on deceptive claims, undisclosed use, and discrimination. Then audit your product or your use of AI against those three things. If you can&#8217;t document that you have done this audit, you&#8217;re exposed. Do it this week.</p><div><hr></div><h2>Deadline tracker</h2><p><strong>&#127466;&#127482; EU</strong> | High-risk AI systems must be compliant with EU AI Act rules (conformity assessment, CE marking, registration) | August 2, 2026 | Just over four months away. Only 8 of 27 member states are ready. 
Assume the deadline holds.</p><p><strong>&#127466;&#127482; EU</strong> | European Parliament votes on Digital Omnibus (proposed extension of high-risk AI deadline) | March 26, 2026 | Three days away. Likely to pass but uncertain. Don&#8217;t count on a delay.</p><p><strong>&#127482;&#127480; USA</strong> | Colorado AI Act enforcement begins | June 30, 2026 | Three months away. Applies to high-risk systems affecting employment, housing, credit, health, and education.</p><p><strong>&#127482;&#127480; USA</strong> | California AB 853 (delayed AI transparency rules) now due | August 2, 2026 | Delayed from January. Requires a training data summary for AI systems trained on 1M+ data points.</p><p><strong>&#127468;&#127463; UK</strong> | ICO finishes engagement with 11 foundation model developers | TBD 2026 | Findings will shape enforcement priorities. No set deadline yet.</p><p><strong>&#127468;&#127463; UK</strong> | UK government to develop statutory ADM Code of Practice | TBD 2026 | Secondary legislation required. 
Will provide practical guidance on automated decision-making, but timing is uncertain.</p><div><hr></div><p>Sources:</p><ul><li><p><a href="https://regulations.ai/regulations/RAI-US-NA-ENFORCE-2026">FTC Policy Statement on AI and Section 5 of the FTC Act</a></p></li><li><p><a href="https://www.ropesgray.com/en/insights/alerts/2026/03/the-white-house-legislative-recommendations-national-policy-framework-for-artificial-intelligence-an">White House National Policy Framework for Artificial Intelligence</a></p></li><li><p><a href="https://ico.org.uk/about-the-ico/our-information/our-strategies-and-plans/artificial-intelligence-and-biometrics-strategy/ai-and-biometrics-strategy-update-march-2026/">ICO AI and Biometrics Strategy Update March 2026</a></p></li><li><p><a href="https://artificialintelligenceact.eu/implementation-timeline/">EU AI Act Implementation Timeline</a></p></li><li><p><a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai">EU Code of Practice on Marking and Labelling of AI-generated Content Second Draft</a></p></li><li><p><a href="https://www.morganlewis.com/pubs/2026/04/ai-enforcement-accelerates-as-federal-policy-stalls-and-states-step-in">AI Enforcement Accelerates as Federal Policy Stalls and States Step In</a></p></li><li><p><a href="https://www.harrisbeachmurtha.com/insights/ai-assisted-hiring-in-2026-managing-discrimination-risk/">AI Enforcement in Hiring Tools 2026</a></p></li><li><p><a href="https://www.loc.gov/item/global-legal-monitor/2026-04-08/united-kingdom-government-adopts-wait-and-see-approach-to-regulating-ai-and-copyright">UK Government Report on Copyright and Artificial Intelligence</a></p></li><li><p><a href="https://ico.org.uk/about-the-ico/what-we-do/legislation-we-cover/data-use-and-access-act-2025/the-data-use-and-access-act-2025-what-does-it-mean-for-organisations/">Data Use and Access Act 2025 Implications for Organizations</a></p></li><li><p><a 
href="https://www.ceotodaymagazine.com/2026/01/ai-liability-in-the-boardroom-what-every-ceo-must-know-in-2026/">CEO and Board AI Liability in 2026</a></p></li><li><p><a href="https://openclawai.io/blog/ftc-ai-policy-statement-agent-enforcement/">The FTC Just Dropped Its AI Enforcement Playbook</a></p></li></ul><p></p><div><hr></div><h6><em><strong>DISCLAIMER</strong></em></h6><h6><em>The information is intended to be helpful but is in no way a substitute for seeking professional advice for your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full <a href="https://aigovernanceplaybook.substack.com/p/disclaimer">DISCLAIMER</a></em></h6><h6><em><strong>Copyright Notice</strong></em></h6><h6><em>The information in this document is protected by international copyright laws. You may use the information for personal purposes but you are not permitted to duplicate it or distribute it to anyone else. This copy is for you only.</em></h6>]]></content:encoded></item></channel></rss>