AI in Hiring and CV Screening
Week of April 21, 2026 | USA · UK · EU
The Use Case
More than 70% of large employers now use some form of AI to filter, rank or score job applicants before a human sees anything: CV screening tools, automated video-interview scorers, skills assessors and chatbot pre-qualification flows. If your product is in this space, or you use AI to hire your own people, all three major jurisdictions now have specific rules that apply to you. The EU's are about to become enforceable.
🇺🇸 USA
No single federal AI hiring law, but existing discrimination law bites hard. Under Title VII of the Civil Rights Act, the Age Discrimination in Employment Act and the Americans with Disabilities Act, you’re liable for disparate impact from AI tools even if you didn’t build them and didn’t know they discriminated. “We bought it from a vendor” is not a defence.
The EEOC removed its original 2023 AI guidance in January 2025 following the change of administration, then issued new guidance in September 2025. Current position: employers must be able to justify algorithmic decisions affecting protected groups and should conduct regular bias testing. Less prescriptive than the old guidance, but the underlying liability under discrimination law is unchanged.
If you operate in New York City, NYC Local Law 144 adds harder requirements: an annual independent bias audit of any automated employment decision tool, public disclosure of the results, and written notice to candidates at least 10 business days before the tool is used on them. Candidates can opt out. The New York City Comptroller found in December 2025 that enforcement had been weak, but that's shifting.
Illinois, Colorado and several other states have their own layers. The US is a patchwork. If you’re multi-state, map each one.
What’s clear: disparate impact liability, in full, applies to AI tools regardless of who built them.
What’s ambiguous: how deep bias testing needs to go at the federal level. The September 2025 EEOC guidance is directional, not prescriptive.
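The standard quantitative screen for disparate impact is the EEOC's four-fifths (80%) rule from the Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80% of the highest group's rate, that is evidence of potential adverse impact. A minimal sketch of the calculation, using hypothetical numbers:

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Four-fifths rule: each group's selection rate divided by the
    highest group's rate. Ratios below 0.8 flag potential adverse impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes from an AI CV-ranking tool.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's ratio is 0.6, below the 0.8 threshold
print(flagged)  # ['group_b']
```

A ratio above 0.8 does not end the analysis (statistical significance and sample size also matter), but running this check across demographic groups on each audit cycle is the baseline regulators and auditors expect.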
🇬🇧 UK
The ICO published its Recruitment Rewired report on 31 March 2026 — the most detailed statement yet of what it expects from employers using automated tools in hiring. Its headline finding: most employers using AI in recruitment don’t realise they’re making automated decisions. They believe they’re getting “decision support.” The ICO’s view is that the practical reality of how the tool is used counts, not how you label it.
Three requirements the ICO treats as non-negotiable: tell candidates an automated tool was used and what it did; ensure any human review is genuinely capable of changing the outcome rather than rubber-stamping it; and test regularly for fairness and bias across demographic groups.
The Data (Use and Access) Act, in force since February 2026, widened the lawful bases available for automated decision-making in recruitment. Legitimate interests can now apply where previously only consent or contractual necessity were available.
Consultation on the ICO’s updated guidance closes 29 May 2026. Final guidance will likely tighten what “meaningful human involvement” actually requires.
What’s clear: transparency to candidates, real human oversight and bias monitoring are expected now, not August. What’s ambiguous: how detailed candidate notifications need to be, and precisely what makes human review meaningful rather than nominal.
🇪🇺 EU
AI hiring tools are explicitly listed as high-risk systems in Annex III of the EU AI Act. The full suite of high-risk obligations becomes enforceable on 2 August 2026.
Deployers (companies using someone else’s AI tool to hire) must: inform candidates that AI was used, give them the right to a human explanation of any decision, keep usage logs and implement documented human oversight.
Providers (companies building or selling AI hiring tools) must: produce technical documentation, conduct a conformity assessment, register in the EU database and design the system for human oversight with documented accuracy and bias testing.
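The deployer duties above (usage logs plus documented human oversight) can be met with a structured record per screening decision. A minimal sketch; the field names are illustrative, not mandated by the AI Act:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningLogEntry:
    """One record per automated screening decision
    (hypothetical schema, not an official AI Act format)."""
    candidate_id: str
    tool_name: str
    tool_version: str
    ai_recommendation: str   # e.g. "reject", "advance"
    human_reviewer: str
    human_decision: str      # the outcome after human review
    rationale: str           # why the reviewer agreed or overrode
    timestamp: str

    @property
    def overridden(self) -> bool:
        # Override rate is one crude signal that human review is
        # more than rubber-stamping.
        return self.ai_recommendation != self.human_decision

def log_decision(entry: ScreeningLogEntry, path: str) -> None:
    """Append the entry as one JSON line to an audit log file."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

entry = ScreeningLogEntry(
    candidate_id="c-1042", tool_name="cv-ranker", tool_version="2.3",
    ai_recommendation="reject", human_reviewer="hr-07",
    human_decision="advance",
    rationale="Relevant contract experience the parser missed.",
    timestamp=datetime.now(timezone.utc).isoformat())
print(entry.overridden)  # True
```

Keeping the rationale and the override flag per decision also produces exactly the evidence the ICO and the AI Act ask for when you need to show that human review can genuinely change the outcome.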
GDPR Article 22 runs alongside all of this: candidates retain the right not to be subject to solely automated decisions with significant effects unless they’ve consented, it’s contractually necessary, or law authorises it.
Fines for failing high-risk system obligations: up to €15 million or 3% of global annual turnover, whichever is higher.
What’s clear: Annex III applies. 2 August 2026 is the hard deadline. What’s ambiguous: whether using a general-purpose AI model for CV screening makes you a provider (with full provider obligations) or only a deployer.
The Practical Minimum
Across all three jurisdictions, the floor looks like this:
Tell candidates AI is being used in their assessment, what it does and what it decides. Keep a human in the loop who can genuinely change the outcome, document how and when they do it. Test your tools for demographic bias at least annually and keep records. Have a clear data retention policy for candidate data.
If you sell into the EU or hire EU-based people, register any high-risk systems before 2 August 2026. If you operate in NYC, commission an annual independent bias audit and publish the results.
The Grey Zone
Three things that still lack a definition precise enough to act on with full confidence.
First, what "meaningful human involvement" actually requires: both the ICO and the EU AI Act demand it, but neither specifies the minimum. A human who reviews an AI-ranked shortlist and always approves it is probably not enough. But how much deviation is required to prove genuine oversight? Unclear.
Second, whether using a general-purpose LLM for CV screening makes you an EU AI Act provider with full documentation and conformity assessment obligations, or merely a deployer. The EU AI Office has not issued definitive guidance on this yet.
Third, at what point a tool that "assists" a hiring decision becomes one that effectively "makes" it. The ICO's answer in Recruitment Rewired is to look at what happens in practice, not what you call it: useful, but not a bright line.
The information is intended to be helpful but is in no way a substitute for seeking professional advice for your specific situation or intent. This applies to business, financial, legal, or other matters discussed herein. Please read the full DISCLAIMER
