
AI Hiring Vendors Are Facing Their "Big Tobacco" Moment

  • Writer: Jeremy Bargiel
  • Mar 13
  • 5 min read

Trevor Higgs | March 2026

The first major class-action lawsuit against an AI hiring vendor is no longer a hypothetical. It’s already here.

One of the most recognized names in HR technology is defending a class-action lawsuit alleging that its AI-powered screening tools systematically discriminated against applicants based on race, age, and disability. The scale? Over 1.1 billion rejected job applications processed through the platform.[1]

Meanwhile, another leading vendor — the company that pioneered video interview AI — halted its facial analysis feature after a complaint to the FTC alleged the technology was deceptive and unfairly harmful to job candidates. And these aren’t isolated incidents. The EEOC has settled cases against companies whose AI screened out applicants over the age of 55.[2][3]

The pattern should look familiar. It’s the same trajectory the tobacco industry followed. For years, tobacco companies argued they didn’t know their products were harmful. When the science proved otherwise, they shifted to “we disclosed the risks.” When that failed, the lawsuits — and the settlements — became an industry-defining reckoning.

AI hiring vendors are walking the same path. “Our algorithms are proprietary” sounds a lot like “our formulas are trade secrets.” “We didn’t intend to discriminate” echoes “we didn’t know it was harmful.” The defense is the same. History suggests the outcome will be, too.


Why This Matters for Every Employer

Here’s what the EEOC has made unambiguously clear: when an AI hiring tool discriminates, the employer is liable. Not just the vendor. Not just the software company. The employer.[4]

This means “we use [Vendor X]” is not a compliance strategy. It’s a liability transfer that doesn’t actually transfer. Your organization is on the hook for the tools you deploy, regardless of who built them.

And the regulatory landscape is only getting more complex. Right now, employers using AI in hiring need to navigate a converging set of requirements from multiple jurisdictions:

NYC Local Law 144 is already in effect, requiring annual bias audits on automated employment decision tools, public posting of results, and candidate notification at least 10 business days before the tool is used (see the sketch after this list for the deadline math). It also mandates that an alternative process be available for candidates who request one.[5]

The EU AI Act takes full effect in August 2026, classifying AI hiring tools as “high-risk” systems subject to mandatory risk assessments, transparency requirements, human oversight mandates, and comprehensive documentation obligations. Penalties reach up to €35 million or 7% of global annual revenue — whichever is higher. And if you’re a US company thinking “that’s a European problem,” consider whether you have EU employees, EU job candidates, or EU customers. If the answer is yes, you’re likely in scope.[6][7][8]

The Colorado AI Act, effective June 2026, introduces transparency and disclosure requirements for high-risk AI systems used in consequential decisions — including employment.[9]

And the EEOC continues to issue guidance reinforcing that existing anti-discrimination laws (Title VII, ADA, ADEA) apply fully to AI-driven hiring decisions. The agency has made clear it will use existing enforcement authority aggressively.[10][11][12]
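
To make that convergence concrete, here is a minimal sketch, in Python, of how a compliance team might encode these obligations as data, with one helper for Local Law 144’s 10-business-day notice window and one for the EU AI Act’s penalty ceiling. The structure, field names, and function names are illustrative assumptions drawn only from the summaries above, not a reference implementation.

from datetime import date, timedelta

# Illustrative snapshot of the obligations summarized above.
# Structure and field names are assumptions for this sketch.
JURISDICTION_RULES = {
    "NYC Local Law 144": {
        "status": "in effect",
        "key_obligations": [
            "annual bias audit, results publicly posted",
            "candidate notice >= 10 business days before use",
            "alternative process on request",
        ],
    },
    "EU AI Act": {
        "status": "full effect August 2026",
        "key_obligations": [
            "risk assessment", "transparency",
            "human oversight", "documentation",
        ],
    },
    "Colorado AI Act": {
        "status": "effective June 2026",
        "key_obligations": ["transparency and disclosure for high-risk systems"],
    },
}

def earliest_tool_use(notice_sent: date, business_days: int = 10) -> date:
    """Earliest date an automated tool may be used after candidate notice
    under a 10-business-day rule. Weekends are skipped; public holidays
    are ignored in this sketch."""
    current, counted = notice_sent, 0
    while counted < business_days:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return current

def eu_ai_act_max_penalty(global_annual_revenue_eur: float) -> float:
    """Penalty ceiling described above: EUR 35 million or 7% of global
    annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# Notice sent Monday 2026-03-02 -> earliest use Monday 2026-03-16.
print(earliest_tool_use(date(2026, 3, 2)))
# A company with EUR 1 billion in revenue faces a ceiling of EUR 70 million.
print(f"{eu_ai_act_max_penalty(1_000_000_000):,.0f}")

Whether the notice day itself counts toward the window, and how holidays are treated, are questions for counsel; the sketch counts only full business days after the notice is sent.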


The 47% Compliance Gap

Here’s the most alarming number in all of this: 78% of organizations now deploy AI in their hiring processes, but only 31% have governance policies in place. That’s a 47-percentage-point gap between adoption and compliance — nearly half of all organizations using AI in hiring with no formal framework to ensure it’s being used legally and ethically.[13][14]

That gap represents a massive unprotected market of organizations that are, in effect, operating without a safety net. They’ve adopted the technology. They haven’t adopted the governance.

And the regulatory deadlines aren’t waiting. NYC is already active. Colorado takes effect in June 2026, and EU AI Act enforcement begins in August 2026. The window to get compliant without the pressure of an investigation or lawsuit is closing.


What “Audit-Ready” Actually Means

Being audit-ready isn’t about checking a box or adding a compliance paragraph to your vendor contract. It means you can answer these questions today:


  • First, can you explain how your AI makes every hiring decision in plain language that a non-technical regulator could understand? Not in statistical terms. Not in vendor marketing language. In plain, clear language.

  • Second, do you have adverse impact testing documentation that demonstrates your tools don’t disproportionately screen out candidates based on protected characteristics? Not from your vendor’s general documentation — from testing on your specific candidate pool. (A minimal sketch of this test follows the list.)

  • Third, do you have a documented process for human oversight of AI-driven decisions? Regulators are looking for evidence that humans are meaningfully in the loop, not just rubber-stamping algorithmic outputs.

  • Fourth, could you produce compliance documentation within 48 hours of a regulatory request? Audits don’t give you months to prepare. They give you days.

  • Fifth, do you have a candidate notification process that meets the requirements of every jurisdiction where you hire? Different laws have different notification timelines, opt-out requirements, and alternative process obligations.
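
On the second question, the standard first-pass test for adverse impact is the EEOC’s four-fifths rule: if any group’s selection rate falls below 80% of the highest group’s rate, the tool is flagged for potential disparate impact. Here is a minimal sketch of that check in Python; the group labels are placeholders and the candidate counts are invented for illustration.

# Four-fifths (80%) rule check on hypothetical screening outcomes.
# All counts below are invented for illustration.
outcomes = {
    # group: (applicants screened, advanced by the tool)
    "group_a": (1000, 300),
    "group_b": (800, 180),
    "group_c": (500, 160),
}

rates = {g: adv / apps for g, (apps, adv) in outcomes.items()}
benchmark = max(rates.values())  # highest selection rate of any group

for group, rate in sorted(rates.items()):
    ratio = rate / benchmark
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")

The four-fifths rule is a screening heuristic, not a safe harbor: a flagged result calls for statistical significance testing and legal review, and a passing result does not by itself make a tool defensible.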


The Companies Acting Now Will Be Ready

This isn’t a prediction about what might happen. It’s a description of what’s already happening. Lawsuits are underway. Regulations are in effect. Enforcement actions have been taken. The compliance reckoning in AI hiring has arrived.

The companies that recognize this moment for what it is — and invest in audit-ready, transparent, science-backed hiring practices today — will be the ones that can answer the board’s questions, survive the audit, and keep hiring confidently while their competitors scramble.

The companies that wait will find themselves in the same position as every industry that was caught unprepared: scrambling to retrofit compliance onto tools that weren’t built for it, and defending practices they can’t explain.

The time to act is now. Not Q3. Not “after we evaluate options.”

It’s now.




[1] Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal.), class-action complaint, 2024.

[2] Complaint filed with the Federal Trade Commission; see FTC.gov for related filings.

[3] U.S. Equal Employment Opportunity Commission, “Select Issues: Assessing Adverse Impact in Software, Algorithms, and AI,” May 2023.

[4] U.S. Equal Employment Opportunity Commission, “Select Issues: Assessing Adverse Impact in Software, Algorithms, and AI,” May 2023.

[5] NYC Local Law 144 of 2021, codified at NYC Administrative Code §20-870 et seq.

[6] Regulation (EU) 2024/1689 (EU Artificial Intelligence Act), Article 99, Penalty Provisions (2024).

[7] EU Artificial Intelligence Act, Article 99, Penalty Provisions (2024).

[8] Regulation (EU) 2024/1689, the Artificial Intelligence Act, adopted June 2024.

[9] Colorado SB 24-205, concerning consumer protections for AI, effective June 2026.

[10] U.S. Equal Employment Opportunity Commission, “Select Issues: Assessing Adverse Impact in Software, Algorithms, and AI,” May 2023.

[11] Title VII of the Civil Rights Act of 1964, 42 U.S.C. §§ 2000e et seq.

[12] U.S. Equal Employment Opportunity Commission, “Select Issues: Assessing Adverse Impact in Software, Algorithms, and AI,” May 2023.

[13] SHRM, “AI in HR: State of Adoption Report,” 2025.

[14] Gartner, “AI Governance in HR Technology Survey,” 2025.
