Act Now Brief | Monday 30 March 2026
For founders, operators, and creators who are using AI | USA · UK · EU
🇺🇸 USA | FTC just proved it will enforce its AI policy statement with real consequences
On March 30, the FTC announced a settlement with OkCupid and Match Group Americas over undisclosed data sharing for AI training. OkCupid gave an AI firm access to 3 million user photos and location data without consent, and its privacy policy did not mention this sharing. The FTC treated this as a deceptive practice under Section 5 of the FTC Act, the same statute the Commission flagged on March 11 as its enforcement tool for AI.
This isn’t a theoretical threat. This is the FTC doing what it said it would do. The settlement bars OkCupid from misrepresenting data practices for 20 years and requires compliance reporting for 10 years. No admission of liability, but the path to future penalties is now open. This matters because OkCupid was doing what many companies do: sharing user data with third parties to train models. If your business model involves data sharing with AI partners, and your users don’t explicitly consent, you’re exposed right now.
So what?
If you’re building AI products: If you acquire training data through partnerships that involve user data, your privacy disclosures must explicitly name the partners and the use. Silence now counts as a deceptive omission.
If you’re using AI in your business: If you share employee, customer, or user data with AI vendors or third parties, audit your contracts and privacy disclosures this week. Undisclosed sharing is now an FTC enforcement priority.
If you’re advising AI companies: Tell clients the FTC just showed its hand. Data sharing for AI training is under Section 5 scrutiny now. Get explicit disclosure and consent mechanisms in place before you touch user data.
Who feels this most:
Data and AI platforms: If you use user data for model training or fine-tuning, you need explicit consent for each use case and each third party you share with. The absence of disclosure is now an enforcement liability.
Dating, social, and consumer apps: You’re collecting rich personal data. If any of it flows to third parties without explicit disclosure, you’re the next case study.
🇪🇺 EU | Digital Omnibus failed to pass. Build for August 2.
The Digital Omnibus vote happened on March 26 as scheduled. It didn’t pass. This means the August 2, 2026 deadline for EU AI Act high-risk compliance remains in place. No delay. No extension. Same deadline as last week. Same urgency as last week.
Only 8 of 27 EU member states have designated their AI Act enforcement authorities, so expect uneven enforcement at first. But enforcement will happen. If you’re shipping high-risk AI into the EU (hiring, lending, healthcare, criminal justice), you need conformity assessment, technical documentation, and CE marking by August 2. That’s four months away. If you haven’t started this work, you’re months behind.
So what?
If you’re building AI products: If your system qualifies as high-risk under Annex III, your compliance timeline is real. August 2 isn’t negotiable. Get your conformity assessment scope defined this week.
If you’re using AI in your business: If you deploy high-risk AI in the EU, you’re liable for impact assessments, risk documentation, and consumer notice. You can’t delegate compliance to the vendor. Do your audit this week.
If you’re advising AI companies: Tell clients the Digital Omnibus didn’t pass. August 2 is the deadline. Build a timeline working backwards from that date. You need compliance scope, vendor capacity, and testing done by mid-June.
🇬🇧 UK | FCA has named AI financial chatbots as an active enforcement risk
On 26 March, the Financial Conduct Authority (FCA) published its latest perimeter report, explicitly flagging AI-powered personal finance tools and chatbots that offer what amounts to regulated financial advice without authorisation. The FCA’s perimeter reports are how the agency signals where formal enforcement attention is heading. This is not a consultation, and it isn’t speculative.
The risk is specific. An AI tool positioned as “guidance” crosses into regulated advice under the Financial Services and Markets Act 2000 (FSMA) the moment it recommends a specific product, summarises a pension’s exit fees, or suggests a fund. The firm that deployed it carries the liability. Consumer Duty obligations add a second layer: if your AI produces a hallucinated rate of return and a customer acts on it, you’re exposed regardless of how your terms of service describe the product. The FCA has said that unsupervised generative AI should not be used for substantive financial communications.
So what?
If you’re building AI products: If your product touches personal finance in the UK, check this week whether any output it produces could be read as a specific financial recommendation. The FCA has now explicitly named this category as enforcement territory.
If you’re using AI in your business: If you’re using an AI tool that produces anything a customer might construe as financial guidance, document how you’re supervising its outputs. “We didn’t know” is not a Consumer Duty defence.
If you’re advising AI companies: Tell fintech clients that the March 26 perimeter report is their signal to get their FSMA and Consumer Duty mapping done before the FCA launches a thematic review into this space.
Who feels this most:
Fintech and wealthtech: You’re the named category. The FCA has told you it’s looking here. The burden of demonstrating you’re on the right side of the regulated/unregulated line is yours.
HR and benefits tools: If your AI helps employees understand pension or salary sacrifice options, you may be closer to the advice boundary than you think.
🇬🇧 UK | ICO still in engagement phase. Documentation remains your shield.
The Information Commissioner’s Office (ICO) remains in the monitoring and engagement phase. No new enforcement actions this week. Still building its picture of data protection practices at foundation model developers and deployers. When the ICO launches enforcement cases, your documentation will be the evidence that proves or disproves your compliance.
Have bias testing results. Have data processing records. Have impact assessments. Have risk documentation. Have transparency disclosures. If the ICO asks and you have to say these documents don’t exist, you’ve compounded your compliance problem with an enforcement problem.
So what?
If you’re building AI products: If you’re a UK AI company or building for UK use, document your testing, bias assessments, and risk mitigation now. These are the artifacts that shield you in enforcement.
If you’re using AI in your business: If you use AI for hiring, benefits, or automated decisions, run an audit of your documentation this week. The ICO will ask. Be ready to show it.
If you’re advising AI companies: Tell clients the ICO engagement with foundation models is building toward enforcement. The question isn’t whether to document, it’s whether to do it now or after the ICO knocks. Do it now.
🟡 Heads up
🇺🇸 USA | Oregon and Washington just regulated AI companion chatbots. Oregon adds private liability.
Oregon SB 1546 (signed March 31) and Washington HB 2225 (signed March 24) are now law. Both regulate AI systems designed to simulate sustained human relationships through natural language conversation, a definition broad enough to cover most consumer-facing LLMs with personalization features.
Both states require disclosure that the AI isn’t human (every hour for minors, every three hours for adults). Both prohibit manipulative tactics like faking emotional distress when a user tries to leave. Oregon adds a private right of action: $1,000 per violation per user, plus attorney fees for plaintiffs. Washington relies on attorney general enforcement only.
The private right of action is the enforcement mechanism you need to worry about. One class action from Oregon users could cost you millions. If you sell consumer-facing AI into these states, you need to be compliant before January 1, 2027, when these laws take effect. Start assessing your disclosure and safeguard mechanisms now.
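For teams wondering what the disclosure cadence looks like in practice, here is a minimal sketch in Python, assuming the hourly (minors) and three-hourly (adults) intervals described above. The intervals, names, and user-type labels are illustrative placeholders, not statutory text:

```python
from datetime import datetime, timedelta

# Illustrative intervals based on the cadences described above:
# hourly re-disclosure for minors, every three hours for adults.
DISCLOSURE_INTERVAL = {
    "minor": timedelta(hours=1),
    "adult": timedelta(hours=3),
}

def disclosure_due(last_disclosed_at: datetime, user_type: str, now: datetime) -> bool:
    """Return True when the 'you are talking to an AI' notice must be re-shown."""
    return now - last_disclosed_at >= DISCLOSURE_INTERVAL[user_type]

# Example: an adult who last saw the notice 3.5 hours ago is due for another one.
last_seen = datetime(2026, 3, 30, 9, 0)
print(disclosure_due(last_seen, "adult", datetime(2026, 3, 30, 12, 30)))  # True
```

Logging each disclosure alongside a check like this also gives you the audit trail you’d want if a plaintiff’s lawyer ever asks.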
So what?
If you’re building AI products: If you build conversational AI that could be characterized as sustaining a relationship, audit your disclosure practices and safeguard rules for Oregon and Washington compliance. You have nine months.
If you’re using AI in your business: If you deploy chatbots or conversational AI systems that interact with consumers, check whether you need to comply with Oregon and Washington rules. Minimum: disclosure and no manipulative tactics.
If you’re advising AI companies: Tell clients Oregon just created private liability, while Washington sticks to AG enforcement. This is different from FTC enforcement or state AG action: a class of users suing for statutory damages is a real business risk. Plan for compliance.
Who feels this most:
Consumer AI and conversational AI: You’re building the exact systems these laws target. Check your disclosure mechanisms and safeguard rules before January 1, 2027.
Dating and social platforms: If your app has features that simulate relationships or use personalization to sustain engagement, Oregon and Washington have you in their sights.
🇺🇸 USA | California AB 1883 advances. Workplace surveillance AI is being regulated.
California AB 1883 passed committee 5-0 on March 19. The bill prohibits employers from using AI surveillance tools to infer protected characteristics about workers (race, gender, disability status, etc.) or from using predictive behavior analysis on workers. Violations carry penalties of up to $500 per employee per incident. The bill is still in the legislature, but the unanimous committee vote signals it will likely pass.
This is separate from the broader California workplace AI bills (AB 1898 on notice/disclosure, SB 947 on automated hiring decisions). If you build HR tech, employee monitoring tools, or worker scoring systems, California is now regulating three different aspects of your product. The patchwork’s expanding, not shrinking.
So what?
If you’re building AI products: If you build HR tech, hiring tools, or employee monitoring systems, assume California will regulate you. Plan for both notice/disclosure requirements and restrictions on inference and predictive behavior analysis.
If you’re using AI in your business: If you use AI for hiring, performance management, or employee monitoring in California, your vendors may soon face legal restrictions. Start asking vendors what they’re doing to comply with AB 1883.
If you’re advising AI companies: Tell clients California is regulating AI surveillance and bias in three separate bills. Build for the strictest rule. The private right of action in these bills means class action risk, not just regulator enforcement.
🟢 On the radar
Colorado AI Act still set for June 30 enforcement: No change since last week. Unless the legislature amends it further, high-risk AI systems affecting employment, housing, credit, healthcare, and education must be in compliance by June 30. That’s three months away.
White House federal preemption litigation task force moving forward: The Department of Justice (DOJ) is actively challenging state AI laws on constitutional grounds. This will take time to resolve. Build for state compliance now and don’t assume federal preemption arguments will succeed in court.
Arizona HB 2311 chatbot bill moving: A hearing is set in the Arizona House on chatbot labeling and disclosure. Small scope, but part of a growing trend of states regulating how you describe your AI’s capabilities.
UK ADM Code of Practice still in development: The ICO will develop statutory guidance on automated decision-making (hiring, benefits, lending, etc.), informed by its ongoing engagement with foundation model developers. No deadline yet, but expect it later this year.
The one thing to do this week
Two items tie for urgency this week, so pick the one that fits your business. If you’re in UK fintech or any product that touches personal finance, read the FCA’s March 26 perimeter report and map your outputs against the regulated/unregulated advice line. If you’re not in fintech, check your user-facing privacy policy against the FTC’s OkCupid settlement: does it name every third party that receives user data, every use case, and every AI partner you share with? If it doesn’t, fix it before the FTC comes to you.
Deadline tracker
🇪🇺 EU | High-risk AI systems must be compliant with EU AI Act rules (conformity assessment, CE marking, registration) | August 2, 2026 | Four months away. Digital Omnibus didn’t pass. Assume the deadline holds.
🇺🇸 USA | Colorado AI Act enforcement begins | June 30, 2026 | Three months away. High-risk systems in employment, housing, credit, health, and education.
🇺🇸 USA | Oregon SB 1546 and Washington HB 2225 take effect (AI companion chatbots) | January 1, 2027 | Nine months away. Private right of action in Oregon ($1,000 per violation).
🇺🇸 USA | California AB 1883 vote in legislature | TBD 2026 | Passed committee 5-0 March 19. High likelihood of passage. Workplace surveillance AI restrictions and bias inference prohibitions.
🇺🇸 USA | California AB 853 (delayed AI transparency rules) now due | August 2, 2026 | Requires training data summary for AI systems trained on 1M+ data points.
🇬🇧 UK | ICO finishes engagement with 11 foundation model developers | TBD 2026 | Findings will shape enforcement priorities. Expect enforcement cases to follow.
🇬🇧 UK | UK government to develop statutory ADM Code of Practice | TBD 2026 | Secondary legislation required. Will provide guidance on automated decision-making. Timing uncertain.
Sources:
FTC Takes Action Against Match and OkCupid for Deceiving Users on Data Sharing
White House National Policy Framework for Artificial Intelligence
