
The Surveillance Problem: Why Candidates Don't Trust AI Hiring


Trevor Higgs | April 2026

Somewhere around 2019, the assessment industry crossed a line. Not a legal line, though several companies came close, but a human one: the line between measuring candidates and monitoring them.

Facial micro-expression analysis. Vocal tone scoring. Eye-movement tracking. Keystroke dynamics. Mouse-behavior patterns. Each one pitched as "new." Each one experienced by candidates as invasive.

The result? A trust collapse that the industry is only beginning to reckon with.

The Surveillance Stack

Let's be specific about what candidates have been subjected to in the name of "assessment."

Video interview platforms analyzed facial expressions during responses, claiming to detect traits like conscientiousness and emotional stability from micro-movements the human eye can't even see. Voice analysis tools scored candidates on tone, pace, cadence, and emotional markers, sometimes without clear disclosure. Game-based assessments tracked mouse movements, click patterns, and response timing to infer cognitive and personality traits.

Each of these tools came with impressive-sounding validation studies, often conducted by the vendors themselves, rarely published in peer-reviewed journals, and almost never replicated independently.

But validation wasn't the real problem. Experience was.

Candidates described these assessments as "creepy," "dehumanizing," "stressful," and "like being watched." They didn't need to understand the methodology to know something felt wrong.

And they were right to feel that way. When a major vendor halted its facial analysis product after an FTC complaint, it validated what candidates had been saying for years: this technology was never about them. It was about processing them.

The Predictive Validity Question

Put aside the candidate experience argument for a moment. Do these surveillance-style assessments even work?

The Schmidt, Oh, and Shaffer (2016) meta-analysis is the most comprehensive study of selection method validity ever conducted. It evaluated 31 different approaches across 100 years of research [1]. The results are clear.

General mental ability: r = .65.

Structured interviews: r = .58.

Integrity testing: r = .46.

Know what's not in the top tier? Facial analysis. Vocal scoring. Mouse tracking. These methods either haven't been studied at sufficient scale or have been studied and found wanting.

The most predictive thing you can measure about a candidate, their cognitive potential, doesn't require a camera. It doesn't require voice analysis. It doesn't require monitoring their mouse movements.

It requires a validated cognitive assessment. That's it.
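To put those coefficients in perspective, a validity of r corresponds to roughly r² of the variance in job performance explained. A quick calculation, using the figures cited above from Schmidt, Oh, and Shaffer (2016):

```python
# Validity coefficients as cited above (Schmidt, Oh, & Shaffer, 2016).
validities = {
    "General mental ability": 0.65,
    "Structured interviews": 0.58,
    "Integrity testing": 0.46,
}

for method, r in validities.items():
    # r squared approximates the share of performance variance explained
    print(f"{method}: r = {r:.2f}, variance explained = {r**2:.0%}")
```

General mental ability alone accounts for about 42% of the variance in job performance; no surveillance-style method comes close to that on published evidence.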

Why 8% Trust

The single-digit trust figure makes perfect sense when you understand what candidates have experienced.

They've been asked to record themselves answering questions to a camera, with no human interaction, knowing their facial expressions would be analyzed by software. They've completed gamified assessments without understanding what was being measured or how it would be scored. They've been rejected by algorithms without any explanation of why.

And when they complained, on Glassdoor, on Reddit, on TikTok, the industry's response was to tout "AI-powered efficiency" even louder.

The 8% isn't a communications failure. It's a design failure. These tools were designed for employer convenience, not candidate trust.

The Path Back to Trust

Rebuilding trust after a surveillance era requires more than removing cameras. It requires rethinking the entire candidate experience from the ground up.

  • Step one: measure what matters. A century of I/O psychology research tells us that cognitive ability is the strongest single predictor of job performance. You don't need surveillance tech to measure it. You need validated cognitive assessment.

  • Step two: explain everything. Every score, every decision, every recommendation, in plain language that candidates can understand. Not legal disclaimers. Not technical jargon. Real explanations.

  • Step three: keep humans in the loop. AI should recommend. Humans should decide. When humans make the final call, candidates have someone to appeal to, ask questions of, and trust.

  • Step four: prove it works. Don't hide behind "proprietary methodology." Publish your validation data. Invite independent audits. Show your work.

The bar is low because the industry set it there. The companies that raise it will win the trust, and the talent, that everyone else is losing.

The Competitive Shift

The trust gap is a market-making moment. The companies that recognize it, and rebuild their assessment processes around transparency and validated science, will have a structural advantage in talent acquisition.

Not because they have better technology. Because candidates will choose them.

In a world where 52% of workers are more worried than hopeful about AI [2], the employer who treats candidates with transparency and respect doesn't just close the trust gap. They open a talent gap that competitors can't close.


[1] Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years of Research Findings. Working paper. (Original meta-analysis: Schmidt, F. L., & Hunter, J. E. (1998). Psychological Bulletin, 124(2), 262-274.)

[2] Pew Research Center. (February 25, 2025). U.S. Workers Are More Worried Than Hopeful About Future AI Use in the Workplace. Survey of 5,273 U.S. adults conducted October 7-13, 2024.
