What's Building | Thursday 2 April 2026
For founders, operators, and creators who are using AI | USA · UK · EU
On the watchlist
Things moving through regulatory pipelines that will matter in the next 3 to 6 months
🇺🇸 USA | Federal preemption and the state law showdown
Status: 🔴 Watch — The White House just picked a fight it may lose, and your state laws are now in legal limbo
What is happening
As we covered last week, the White House released a National Policy Framework on 20 March 2026 proposing broad federal preemption of state AI laws. The framework directs the Attorney General to challenge state laws in federal court, particularly those requiring “alterations to truthful outputs” of AI models or imposing liability on developers. Since then, Congress has rejected comparable preemption in the National Defense Authorization Act, and states have pushed back hard.
An NPR analysis from 28 March shows states refusing to back down despite the Trump administration’s preemption push, with Colorado, California, and Texas already in the litigation pipeline. The Commerce Department report flagging burdensome state laws remains the administration’s foundation for legal challenges, but with Congress rejecting comparable preemption in the NDAA, the statutory footing for those challenges is weaker than it was when the framework was announced.
Why founders should care
This battle is still moving in slow motion, but state laws aren’t freezing in place while it plays out. Colorado’s impact assessment deadline has moved to June 2026. New York City’s bias audit requirement is already live. California just added a new requirement (see below). The federal government is betting it can use dormant Commerce Clause theory to kill state laws in federal court, but Congress has already rejected that framing. If you operate across state lines, assume state laws are binding and budget for them accordingly. The preemption fight isn’t a “wait and see” situation for your product roadmap.
So what?
If you’re building AI products: If you sell across state lines, don’t wait for federal preemption to succeed. Colorado’s impact assessment deadline is June 2026, California is adding new requirements (effective 1 July 2026), and NYC is live now. Build for the hardest state you operate in, starting with Colorado by May 2026.
If you’re using AI in your business: State laws still apply to how you use AI in hiring, lending, and employment decisions. If you operate in Colorado, NYC, or California, audit your systems against their specific requirements (impact assessments, bias audits, and advance notice to candidates). Do this by May 2026 for Colorado and California deadlines.
If you’re advising AI companies: Tell clients that federal preemption is uncertain but state law is certain. Help them audit which states they operate in, what those states require, and what the actual deadline is for each. If they’re using AI for hiring or lending in multiple states, they need a compliance map by May 2026.
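To make “compliance map” concrete, here is a minimal sketch of one as data, using the jurisdictions and deadlines covered in this issue. The structure and field names are our own illustration (and the exact Colorado day is assumed, since only the month is public), not a standard schema:

```python
from datetime import date

# A minimal state-by-state compliance map built from deadlines in this issue.
# Field names and structure are illustrative, not any official format.
COMPLIANCE_MAP = {
    "NYC": {
        "law": "Local Law 144",
        "requires": ["annual bias audit", "advance notice to candidates"],
        "deadline": date(2023, 7, 5),   # enforcement began July 2023; live now
    },
    "Colorado": {
        "law": "Colorado AI Act",
        "requires": ["impact assessment for high-risk systems"],
        "deadline": date(2026, 6, 30),  # June 2026; exact day assumed here
    },
    "California": {
        "law": "Executive Order N-5-26 (procurement)",
        "requires": ["state procurement standards, still being written"],
        "deadline": date(2026, 7, 1),   # new requirements effective 1 July 2026
    },
}

def overdue_or_upcoming(today: date, horizon_days: int = 90) -> list[str]:
    """List jurisdictions whose deadlines are already live or inside the horizon."""
    hits = []
    for state, info in COMPLIANCE_MAP.items():
        days_left = (info["deadline"] - today).days
        if days_left <= horizon_days:
            hits.append(f"{state}: {info['law']} ({max(days_left, 0)} days left)")
    return hits

print(overdue_or_upcoming(date(2026, 4, 2)))
```

The value isn’t the code; it’s forcing “which law, which requirement, which date” into one reviewable place per jurisdiction.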
Who feels this most
HR / Hiring: New York City requires bias audits and advance notice now. Illinois allows candidates to sue directly if they believe they were discriminated against via AI hiring tools, without filing a complaint first. Colorado’s impact assessment deadline is June 2026. California’s new AI executive order (below) extends to hiring tools. Multi-state hiring tools must comply with all of these. Start with NYC and Colorado.
Financial Services: Multi-state lenders must assume state AI laws remain binding. The Federal Reserve updated model risk management guidance for AI systems in February, and state attorneys general are watching for discrimination in credit scoring. If you approve loans across state lines, assume you need impact assessments in Colorado, California, and any state where you originate credit.
Retail & AdTech: Pricing and advertising algorithms are not yet explicitly regulated at state level, but California’s new executive order focuses on government procurement standards. If you sell to California state agencies, this affects you directly.
🇺🇸 USA | California expands AI government procurement standards
Status: 🟡 Watch — California just made AI companies’ lives harder if they want to sell to the state
What is happening
On 31 March 2026, California Governor Gavin Newsom signed Executive Order N-5-26, directing state agencies to develop new standards for AI companies seeking to contract with the state. The order requires agencies to create procurement requirements for AI systems, including responsible use guidelines for generative AI in government operations. This applies to any AI product sold to or used by California state agencies, from HR tools to data analytics platforms. The executive order also directs state agencies to expand responsible use of generative AI internally, with guidelines on transparency and accountability.
Unlike state AI laws (which face federal preemption challenges), procurement standards are a different animal legally. They function as a market requirement, not a regulation. If you want California’s business, you’ll need to meet their procurement standards.
Why founders should care
This isn’t a regulation yet. It’s a procurement requirement, which means it only applies if you want to sell to California state agencies. But California is a massive buyer of software and services. If your product is even tangentially AI-related and you want California state business, you need to pay attention. The standards are still being written (agencies have been directed to develop them), so the exact requirements aren’t yet public. But Newsom’s focus is on transparency, responsible use, and safety.
If you’re selling hiring, analytics, or decision-making tools to California agencies, expect questions about how your system works, how you test for bias, and what safeguards you have in place. This will also set a precedent for other states to do the same.
So what?
If you’re building AI products: If you sell to government agencies in California (or other states following its lead), start tracking what procurement standards are being written. Newsom’s order directs agencies to create standards, but the final requirements aren’t public yet. Monitor the California government procurement office for updates starting in Q2 2026.
If you’re using AI in your business: This doesn’t directly affect you unless you’re selling to a California state agency. If you are, you’ll need to meet the new procurement standards. For most small teams, this is a low priority unless you’re a govtech company.
If you’re advising AI companies: Tell clients selling to government agencies in California or other states that procurement standards are becoming a compliance requirement alongside regulation. These are often stricter than regulations because government buyers can set their own requirements. If a client sells hiring, analytics, or decision-making tools to the government, they should start planning for procurement requirements now.
Who feels this most
HR / Hiring: California state agencies use hiring tools. If you sell or provide these tools to the California government, you’ll need to meet new procurement standards. Most will focus on bias testing, transparency, and non-discrimination safeguards.
Retail & AdTech: Pricing and targeting algorithms used by California state agencies (if any) will be subject to procurement review. Most AdTech does not sell directly to the government, so this is low priority unless you work in the govtech space.
🇺🇸 USA | FTC’s Section 5 enforcement playbook is live
Status: 🟡 Watch — The FTC just gave itself a blank check to enforce AI practices you may not know are illegal
What is happening
As we noted last week, the Federal Trade Commission published its Policy Statement on AI and Section 5 of the FTC Act on 11 March 2026. The statement doesn’t introduce new rules but interprets the century-old Section 5 prohibition on unfair or deceptive practices to apply directly to AI systems. The statement signals the FTC’s enforcement priorities: algorithmic discrimination, deceptive AI-generated content, privacy violations tied to AI data collection, non-transparent automated decision-making, and false or misleading claims about AI safety. Since the policy dropped, the FTC has been laying the groundwork for enforcement cases. No major enforcement actions have been filed yet, but the targeting is explicit.
Why founders should care
If you generate content, use AI to make decisions about people, train models on data you collected without clear consent, or claim your AI is safe or unbiased, you’re in the FTC’s perimeter. The Section 5 standard is broad, which gives the FTC room to act but also means the boundaries aren’t crystal clear. Expect enforcement action in the next 6 to 12 months, particularly against companies making safety or fairness claims they can’t substantiate. The FTC’s approach is to use existing authority rather than wait for new legislation. This is faster and gives the agency more discretion.
So what?
If you’re building AI products: Audit your content generation, disclosure practices, and data sourcing now. The FTC will challenge any claim about AI that you can’t prove. If you generate images, text, or audio, label it. If you make safety claims, document them. Document data sourcing and user consent for training. Do this audit by May 2026 (a minimal labeling sketch follows after this list).
If you’re using AI in your business: If you use AI to screen candidates, approve credit, detect fraud, or make decisions affecting people, document that you’ve tested the system for discrimination. The FTC is enforcing anti-discrimination law through the AI lens. If you don’t have evidence that your system doesn’t discriminate, fix it.
If you’re advising AI companies: Tell clients that the FTC will challenge any AI claim they can’t prove. Treat AI systems like consumer products: disclosed features, tested safety claims, transparent decision-making. If a client makes any claim about fairness or safety, they need documentation to back it up.
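On the labeling and documentation points above, a cheap starting move is to attach a machine-readable disclosure record to every generated asset. The sketch below is our own illustration; the field names are not an FTC or C2PA schema, just the minimum audit trail you would want to produce if a claim is ever challenged:

```python
import json
from datetime import datetime, timezone

def disclosure_record(model: str, content_type: str, prompt_id: str) -> dict:
    """Build a simple provenance record for a piece of AI-generated content.

    Illustrative only: field names are our own, not a regulatory schema.
    The point is an audit trail you can produce if a claim is challenged.
    """
    return {
        "ai_generated": True,                        # the visible-label fact
        "model": model,                              # what produced it
        "content_type": content_type,                # "image" | "text" | "audio"
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "prompt_id": prompt_id,                      # link back to your own logs
    }

record = disclosure_record("internal-gen-v2", "image", "prompt-8841")
print(json.dumps(record, indent=2))
# Store this alongside the asset, and surface a human-readable
# "AI-generated" label wherever the asset is shown.
```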
Who feels this most
Media & Publishing: If you generate images, deepfakes, or synthetic media, the FTC is now explicitly watching for deceptive, unlabeled synthetic content. Label all generated content clearly. Don’t make unlabeled deepfake ads.
Financial Services: The FTC and Consumer Financial Protection Bureau are coordinating on algorithmic discrimination in lending. If you use AI to score credit, price loans, or detect fraud, document non-discrimination testing.
HR / Hiring: If you use AI to screen resumes, conduct interviews, or rank candidates, the FTC is watching for discrimination. Documentation of fairness testing is now your first line of defense.
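For teams wondering what “fairness testing documentation” looks like at minimum: NYC’s bias audit rule is built around selection-rate impact ratios, and the four-fifths threshold is the long-standing EEOC rule of thumb for flagging adverse impact. Here is a minimal sketch of that calculation with hypothetical data; a real audit needs an independent auditor, intersectional categories, and sample-size rules this ignores:

```python
def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection-rate impact ratios per group (the NYC LL144-style metric).

    selections maps group -> (selected, total_applicants). Illustrative:
    a real bias audit adds intersectional categories and minimum sample sizes.
    """
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    best = max(rates.values())                 # highest group selection rate
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes, not real data.
data = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (45, 90)}
for group, ratio in impact_ratios(data).items():
    flag = "REVIEW (below four-fifths)" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

Running this monthly and keeping the outputs is exactly the kind of evidence that becomes your first line of defense.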
🇪🇺 EU | Digital Omnibus trilogue accelerating, and member states must launch sandboxes by August
Status: 🔴 Watch — EU AI Act timelines just locked in, and compliance deadlines are now fixed through 2028. Member states must open regulatory sandboxes by 2 August 2026
What is happening
As covered last week, trilogue negotiations began after Parliament confirmed its position on 26 March. The next trilogue session is scheduled for 28 April 2026, with reports indicating political negotiations are already intense. Separately, each EU member state is required by the AI Act to establish at least one AI regulatory sandbox by 2 August 2026. The sandbox is a controlled environment where AI systems can be tested with regulatory guidance before market release.
The Commission published draft implementing guidance in December 2025 and requested feedback by January 2026. Sandboxes are optional for companies (you don’t have to use one), but they provide liability protection if you follow sandbox guidelines in good faith. For the first time, member states must announce their sandbox structures, hosting organizations, and application processes by mid-2026.
Why founders should care
Two separate pressures are now on schedule. First, the Digital Omnibus trilogue is moving fast. Agreements are expected by late April or May 2026, with final compliance deadlines confirmed: Annex III systems (hiring, credit, biometrics) by 2 December 2027, Annex I systems (AI embedded in products covered by EU safety legislation) by 2 August 2028. These dates are now locked in across EU institutions and won’t change significantly in trilogue. Second, EU member states must have sandboxes open by 2 August 2026.
If you’re a founder building AI in Europe, you now have a choice: wait for sandboxes to open (August 2026) and test under modified rules, or start compliance work now under the standard AI Act regime. Sandboxes offer liability protection if you follow their guidance, but they require approval and participation. For most small teams, waiting for your national sandbox to open makes sense. For teams operating across multiple EU countries, you can pick the sandbox with the best terms for your use case.
So what?
If you’re building AI products: If you sell to the EU and your product makes decisions about hiring, credit, or other high-risk use cases, your hard deadline is 2 December 2027 (20 months away). Check whether your member state has announced its sandbox yet (most will by July 2026). If your system qualifies, applying for sandbox participation in Q3 2026 gives you liability protection during the compliance period. If not, start conformity assessment planning now.
If you’re using AI in your business: If you use AI in the EU for hiring, lending, or decisions affecting people, assume you’re subject to the AI Act as a user. You may need to ensure the systems you buy are compliant by December 2027. Track whether your AI vendor has a compliance plan.
If you’re advising AI companies: Tell clients with EU exposure that December 2027 is now a locked-in deadline across EU institutions. No extension is coming. If they sell HR, credit, or insurance products to the EU, they have 20 months and should start conformity assessment immediately. Sandbox participation (launching August 2026) is optional but offers liability protection. Help clients understand which member state’s sandbox is relevant to their use case.
Who feels this most
HR / Hiring: Your EU deadline is 2 December 2027. Hiring tools are explicitly high-risk. Start documentation, fairness testing, and impact assessment immediately. Most member states will announce their sandboxes by July 2026. If your system is hiring-focused, apply for sandbox participation by Q4 2026.
Financial Services: Credit scoring and loan approval are high-risk. Same 2 December 2027 deadline. Lenders serving multiple EU markets should assume this deadline is non-negotiable.
Healthcare: Diagnostic and treatment recommendation systems are likely high-risk. If you sell to EU hospitals, classify your system immediately. High-risk systems must comply by December 2027.
🇬🇧 UK | ICO publishes report and consultation on automated hiring decisions
Status: 🟡 Watch — The UK has moved from hands-off to hands-on on hiring AI, with a consultation open until 29 May 2026
What is happening
On 31 March 2026, the Information Commissioner’s Office published a report on automated decision-making in recruitment, based on evidence from over 30 employers and public perception research. The report finds that many employers don’t acknowledge they’re using automated decision-making, and therefore fail to put in place safeguards like transparency, bias monitoring, and accountability measures. The ICO is consulting on draft guidance until 29 May 2026.
The key finding: most employers think they’re using decision support (human-in-the-loop), but evidence shows they’re using automated decision-making with no meaningful human involvement. The ICO expects meaningful human involvement to be genuine and active, not a rubber stamp. The consultation covers data protection impact assessments, bias testing, transparency with candidates, and recourse rights.
Why founders should care
This signals a major shift in UK regulatory approach. The government previously favored a “pro-innovation, non-statutory” stance on workplace AI. This report ends that. The ICO is now saying that if you use AI to make hiring decisions in the UK, you must test for bias, be transparent with candidates, and give them a genuine right to human review. The consultation is open until 29 May, which means the final guidance will land in Q3 2026.
If you sell hiring tools to the UK or use AI for hiring in the UK, this is your compliance roadmap for the next 12 months. The ICO’s tone is notably stricter than previous guidance. This isn’t a suggestion. It’s an enforcement direction.
So what?
If you’re building AI products: If you sell hiring tools to the UK, assume the ICO’s final guidance (landing mid-2026) will become your compliance floor. You’ll need: bias testing (ideally monthly), transparency features (tell candidates how the system works), and genuine human review rights (not a rubber stamp). Start building these features now if you don’t have them. The consultation closes on 29 May, so the final guidance lands in Q3 2026.
If you’re using AI in your business: If you use AI to screen candidates in the UK, audit your system now. Do you have evidence of bias testing? Can candidates see how you screened them? Can they request human review? If the answer to any of these is no, fix it by May 2026.
If you’re advising AI companies: Tell UK-based clients that the ICO has moved from advisory to enforcement mode on hiring AI. The consultation (closes 29 May) is your chance to shape the final rules. If your clients build hiring tools or use them to screen candidates, respond to the consultation if you want to influence the outcome. Final guidance lands in Q3 2026.
Who feels this most
HR / Hiring: This is directly about you. The ICO expects meaningful human involvement, regular bias testing, and transparent communication with candidates. If you use AI to screen resumes or conduct interviews in the UK, document your bias testing and human review process now. The consultation closes on 29 May. If you build hiring tools, respond to the consultation.
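What does “not a rubber stamp” look like as evidence? One approach (our own illustration, not an ICO-prescribed format) is to log every human review against the AI recommendation and track how often reviewers actually depart from it; a sustained 0% override rate across many decisions is exactly the pattern the ICO report criticizes:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One human review of an AI screening recommendation.

    Illustrative schema: the point is evidence that a human could,
    and sometimes did, change the outcome before it was applied.
    """
    candidate_id: str
    ai_recommendation: str   # e.g. "reject" | "advance"
    final_decision: str      # what the human reviewer actually decided
    reviewer: str
    rationale: str           # required field: forces an active judgment

def override_rate(records: list[ReviewRecord]) -> float:
    """Share of decisions where the reviewer departed from the AI output."""
    overrides = sum(r.ai_recommendation != r.final_decision for r in records)
    return overrides / len(records)

log = [
    ReviewRecord("c-101", "reject", "reject", "jdoe", "weak portfolio match"),
    ReviewRecord("c-102", "reject", "advance", "jdoe", "career gap explained"),
]
# A review process where this number never moves off 0% is a rubber stamp.
print(f"override rate: {override_rate(log):.0%}")
```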
🇬🇧 UK | Copyright and AI policy remains frozen, but the door is cracking open
Status: 🟡 Watch — The UK government backed away from its own opt-out proposal, which means copyright and AI in the UK is now a live policy question with no clear answer
What is happening
As we covered last week, the UK Department for Science, Innovation and Technology published its response to a copyright and AI consultation on 19 March 2026. The government abandoned its previous opt-out proposal (allow AI companies to train on copyright works, but let rights holders opt out) after overwhelming rejection by creative industries. The government now has no preferred policy and is adopting a “wait and see” approach, monitoring litigation (Getty Images v Stability AI, running in parallel in the UK and the US), international developments, and market dynamics. No new progress this week, but the policy question remains open.
Why founders should care
The UK copyright question is genuinely unresolved. The government has rejected its own best idea and is waiting for courts and market forces to resolve the issue. If you’re training models on copyright material, you’re technically violating UK copyright law (the existing text-and-data-mining exception covers only non-commercial research, so there’s no exception for commercial training). The government isn’t enforcing this aggressively right now, but that could change if courts rule against AI companies. For now, you have a grace period, but it’s not a long one. By late 2026 or 2027, the government is likely to revisit this question, possibly with a different answer based on court decisions.
So what?
If you’re building AI products: If you train generative models on copyright material and serve UK customers, you’re technically not covered by a copyright exception (unlike some EU countries). You’re in a legal gray zone. The government isn’t enforcing it now, but courts may clarify by late 2026. Monitor the Getty case and any UK copyright litigation.
If you’re using AI in your business: If you use generative AI trained on copyright material (almost all large models do this), understand that the UK government’s position is “we don’t know yet.” This is lower risk than the EU AI Act, but not zero risk. No action needed now, but track court cases.
If you’re advising AI companies: Tell clients with UK copyright exposure that the government has paused reform and is waiting for courts and international developments. This buys time, but it’s not a permanent solution. Copyright risk in the UK is lower than in the EU but higher than in the US.
Who feels this most
Media & Publishing: If you train AI models on published works, the UK government’s wait-and-see approach means you’re in a policy limbo that could swing either way. The EU is negotiating stronger copyright protections for creators (as part of the Digital Omnibus trilogue); the UK is waiting. If you’re a model builder, the UK is currently less restrictive than the EU, but that could change.
🇬🇧 UK | Children’s online safety regulations now include AI obligations
Status: 🟡 Watch — A consultation launched on 2 March with tough new rules for AI chatbots and generative AI services, open for comment until 26 May
What is happening
As covered last week, the UK Department for Science, Innovation and Technology launched a consultation on 2 March 2026 called “Growing up in the online world: a national conversation.” The consultation includes explicit obligations for AI chatbots and generative AI services. Proposed measures include stronger age assurance mechanisms, a potential statutory minimum age for social media, raising the UK’s age of digital consent (currently 13), restrictions on features like livestreaming and disappearing messages, and new obligations for AI chatbots and generative AI to protect children.
The consultation closes on 26 May 2026. On 12 March, the ICO and communications regulator Ofcom issued coordinated letters to social media and video platforms demanding compliance action beyond self-declaration.
Why founders should care
If you’re building a chat product, a generative AI tool accessible to children, or any service that might attract users under 16, this consultation is directly about you. The UK is moving toward age-gating and content safety obligations for AI. The consultation is broad, which means the final rules could be narrower or broader depending on feedback.
For now, treat this as a signal: UK child safety regulation for AI is coming, probably within 12 to 18 months after consultation closes. The consultation closes 26 May, so you have time to respond if your product affects children.
So what?
If you’re building AI products: If your AI product is accessible to anyone under 16 in the UK, think now about age verification and age-appropriate safeguards. The consultation closes 26 May. If you have UK customers, respond to the consultation if you want to shape the final rules. Implementation timeline is likely 2027 at the earliest.
If you’re using AI in your business: If you use AI chatbots or generative AI services in the UK, track this consultation. If final rules require age verification or content filtering, you’ll need to ensure your AI provider complies. For most small teams, this is low priority right now. Watch for the government’s consultation response in late 2026.
If you’re advising AI companies: Tell UK-based clients that child safety regulation for AI is coming. The consultation (closes 26 May) is your window to shape it. If your product serves children or could serve children, respond to the consultation. Final rules likely land in late 2026 or early 2027.
Who feels this most
EdTech: If you build educational AI products used by students under 16 in the UK, this consultation is directly about you. Age verification and content safety will be mandatory. Start thinking now about how to implement age checks and protect minors from inappropriate outputs.
Probably noise
Developments unlikely to affect founders in the next 12 months, and why
🇪🇺 EU | GPAI Code of Practice finalized: The EU AI Office published the final GPAI (general-purpose AI) Code of Practice on 5 March 2026. This is soft law, not binding, and sets out voluntary best practices for AI model providers on transparency, copyright, and safety testing. It’s not an enforcement mechanism for small teams right now. If you’re a model provider (building Claude-like systems), pay attention. If you’re using AI in your business or building customer products, this is background noise for now.
🇪🇺 EU | EU member states report on AI sandboxes: Over the past week, several EU member states announced their sandbox implementations. Germany, France, Spain, and others are opening regulatory sandboxes for AI testing. This is good news for founders but does not create new compliance obligations. Sandbox participation is optional. If you operate in a specific EU country and want liability protection, check that country’s sandbox announcement. Most announcements will be complete by July 2026.
🇬🇧 UK | AI Growth Lab sandboxes delayed: The UK DSIT opened consultation on its AI Growth Lab (regulatory sandboxes) in October 2025, with responses due 2 January 2026. No new updates in the past week. This remains a policy proposal with no confirmed timeline. It’s interesting but not actionable yet. Revisit if the government announces a launch date.
The pattern this week
The regulation wave is accelerating across all three jurisdictions, and the pattern is clear: governments are moving from “should we regulate?” to “here is how you comply.” The UK ICO just published enforcement expectations for hiring AI. California just added government procurement standards. The EU has locked in compliance deadlines for 16 months from now. The US is fighting in court over state laws while the FTC uses existing authority to enforce AI practices. None of these developments are stopping. All of them are moving toward enforcement by mid-2026.
The most important shift this week is the UK’s move from advisory to enforcement on hiring AI. The ICO report signals that the government’s hands-off period on workplace AI is over. The consultation until 29 May gives companies a window to shape the final rules, but the direction is set: meaningful human involvement, bias testing, and transparency. This is the same pattern we saw in the EU with the AI Act and in US states with impact assessments. The regulatory floor is rising everywhere.
California’s government procurement order is a model that other states will likely copy. Procurement standards are harder to challenge legally than regulations (they are not rules about how commerce works, they are rules about how the government buys software), and they create a market incentive for AI companies to meet safety and transparency standards. Expect more states to do this.
The EU’s sandbox requirement (due 2 August 2026) creates a test environment where small teams can get liability protection if they follow sandbox guidelines. This is valuable, but it’s also a signal that the EU is serious about enforcement. Sandboxes exist so companies have a place to fail safely. Once sandboxes are open, there won’t be any excuse for not knowing the rules.
Sector getting the most heat this week
Sector: HR / Hiring
You’re now in the regulatory crosshairs across all three jurisdictions, and the pace has accelerated this week. In the US, New York City’s bias audit requirement is live, Illinois allows direct lawsuits, and Colorado’s deadline is June 2026. The FTC is explicitly watching hiring AI for discrimination. In the UK, the ICO just published a report saying most hiring AI lacks meaningful human involvement and is demanding change by the time its guidance finalizes in Q3 2026. In the EU, hiring tools are high-risk, and your compliance deadline is 2 December 2027. If you build, sell, or use AI for hiring, your 2026 is now allocated to compliance work.
The ICO’s report this week is the most notable. It found that 80% of employers think they’re using decision support (human-in-the-loop), but evidence shows they’re using fully automated decisions with no real human involvement. The ICO is now saying that rubber-stamp review doesn’t count as human involvement. A human must have the ability to actively influence the decision before it’s applied. This is a higher standard than most hiring teams currently meet. If you use AI to screen candidates in the UK, document your human review process now. If it’s a rubber stamp, redesign it.
If you sell hiring tools, assume all jurisdictions are now on a tighter compliance timeline. New York City requires bias audits now. Colorado requires impact assessments by June. The UK expects meaningful human involvement and bias testing by the time guidance finalizes in Q3. The EU has until December 2027, but that is only 20 months away. Start with whatever deadline is closest to you. If you sell to multiple jurisdictions, pick the hardest standard and build for that. You will be compliant everywhere else.
Dates to put in your calendar
11 March 2026 | USA | FTC Policy Statement on AI and Section 5 published. The FTC is now actively enforcing AI practices under existing authority. Review your disclosure practices, data sourcing, and any safety claims you make.
20 March 2026 | USA | White House releases National Policy Framework for AI with broad preemption proposal. The federal-state legal battle is formally on. Watch for litigation updates.
26 March 2026 | EU | European Parliament confirms position on Digital Omnibus. AI Act compliance deadlines are now locked in: 2 December 2027 for high-risk systems (Annex III), 2 August 2028 for Annex I systems embedded in regulated products.
28 March 2026 | USA | NPR analysis shows states pushing back on the Trump administration’s preemption push. The federal-state AI law battle is accelerating. If you operate across state lines, assume state laws are binding for now.
31 March 2026 | UK | ICO publishes report on automated decision-making in recruitment. The UK’s hands-off period on hiring AI is over. If you use or sell hiring tools in the UK, start auditing for meaningful human involvement and bias testing.
31 March 2026 | USA | California Governor signs Executive Order N-5-26 on AI government procurement standards. If you sell to California state agencies, start tracking what procurement standards are being written. Final requirements will be published by Q3 2026.
28 April 2026 | EU | Next Digital Omnibus trilogue session. Watch for movement on copyright protections, user liability, and AI-generated content marking timelines. Final agreement expected by May 2026.
26 May 2026 | UK | Consultation deadline for “Growing up in the online world,” including AI chatbot and generative AI obligations. Submit feedback if your product affects children.
29 May 2026 | UK | Deadline for feedback on the ICO consultation on automated recruitment decisions. If you use or sell hiring tools in the UK, respond to the consultation to shape the final guidance.
June 2026 | USA | Colorado AI Act impact assessment deadline for high-risk systems. If you operate in Colorado and use AI for hiring, lending, or consequential decisions, this is your next hard deadline.
2 August 2026 | EU | Deadline for each EU member state to establish at least one AI regulatory sandbox. Sandboxes provide liability protection if you follow their guidelines in good faith while testing AI systems. This is your window to enter a sandbox if your system qualifies.
2 August 2026 | EU | EU AI Office enforcement powers activate. The office can issue fines of up to 3 percent of global annual turnover or 15 million euros, whichever is higher (for a company with 200 million euros in turnover, 3 percent is only 6 million, so the 15 million euro floor applies).
2 December 2027 | EU | Compliance deadline for high-risk AI systems (Annex III) under the AI Act. If you sell hiring, credit, insurance, biometrics, or consequential decision-making tools to the EU, this is your deadline.
Sources:
California Governor Signs Executive Order N-5-26 on AI Government Procurement Standards
Trump wants a deadlocked Congress to move on AI. Frustrated states say they already have
UK ICO Publishes Guidance on AI and Automated Recruitment Decisions
AI Regulatory Sandboxes: State of Play and Implementation Challenges
Parliamentary Think Tank: Parliament’s emerging position on the Digital Omnibus
