The EEOC, the EU, and Your AI: A Compliance Roadmap
Trevor Higgs | April 2026
In August 2026, the European Union's Artificial Intelligence Act will reach full enforcement, and if you're a US company that thinks this is purely a European concern, you may want to reconsider.
The EU AI Act [1] is the most comprehensive AI regulation in the world, and its treatment of hiring technology is among the most stringent provisions in the entire framework. AI systems used in recruitment, screening, evaluation, or decision-making about workers are classified as "high-risk", a designation that triggers mandatory compliance requirements backed by a penalty regime whose top tier reaches €35 million or 7% of global annual turnover [2].
For any US company that employs people in the EU, accepts job applications from EU residents, or processes candidate data originating in EU member states, this regulation is not optional and not avoidable. Here's what you need to understand and what you need to do before August 2026.
What the EU AI Act Requires for "High-Risk" AI in Hiring
The Act establishes a detailed set of obligations for any organization deploying high-risk AI systems, and hiring AI falls squarely in this category. The requirements cover the full lifecycle of the AI system, from development through deployment to ongoing monitoring.
Risk management is the foundation. Organizations must implement and maintain a risk management system that identifies, analyzes, and mitigates risks associated with their AI hiring tools. This isn't a one-time assessment. It's a continuous process that must be documented and updated throughout the system's lifecycle. The risk management framework must consider risks to fundamental rights, including the right to non-discrimination and the right to data protection.
Data governance requirements mandate that training, validation, and testing datasets used by AI hiring tools meet specific quality standards. Datasets must be relevant, representative, and as free from errors as possible. Organizations must be able to demonstrate that the data underlying their AI systems doesn't encode historical biases or systematically disadvantage protected groups.
Technical documentation must be comprehensive enough for authorities to assess the AI system's compliance. This includes documentation of the system's intended purpose, how it was developed and tested, what data it uses, how it makes decisions, and what its known limitations are. The documentation must be maintained and updated.
Transparency requirements mean that deployers of high-risk AI in hiring must inform individuals that they are subject to AI-driven evaluation. This goes beyond simply mentioning AI in the application terms. Candidates must understand that AI is being used, what it evaluates, and how its outputs factor into hiring decisions.
Human oversight is not a suggestion; it's a mandate. The Act requires that high-risk AI systems be designed and deployed so that they can be effectively overseen by humans. This means qualified people must be able to understand the AI's outputs, recognize potential errors, and override or reverse AI-driven decisions when necessary. Rubber-stamping algorithmic recommendations does not constitute meaningful oversight.
Accuracy, robustness, and cybersecurity standards must be maintained throughout deployment. The AI system must perform consistently, be resilient against errors and attempts at manipulation, and be protected against unauthorized access or data breaches.
Why US Companies Are in Scope
The EU AI Act uses an effects-based jurisdictional approach similar to GDPR. You don't need to have a legal entity in the EU to be covered. The Act applies to any organization that places an AI system on the EU market or puts an AI system into service in the EU, including AI systems whose output is used in the EU.
In practical terms, this means US companies are covered if they do any of the following: hire or evaluate candidates who are EU residents; employ workers in EU member states and use AI in performance evaluation, promotion, or termination decisions; process candidate or employee data originating from the EU through AI systems; or offer AI-powered hiring tools or services to clients operating in the EU.
For mid-market companies with even modest international operations — a European sales office, remote employees in EU countries, or job postings that attract EU applicants — the Act is almost certainly applicable.
The Penalty Structure
The EU AI Act's penalty framework is designed to make non-compliance materially painful. Article 99 sets tiered maximums: the top tier, reserved for prohibited AI practices, reaches €35 million or 7% of worldwide annual turnover, whichever is higher. Violations of the high-risk requirements that apply to hiring carry fines of up to €15 million or 3% of turnover, and providing incorrect or misleading information to authorities can cost up to €7.5 million or 1%.
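The "whichever is higher" structure means the ceiling scales with company size rather than capping out at the fixed amount. A minimal sketch of that calculation, using the Act's top tier as an illustrative input (figures for demonstration only, not legal advice):

```python
# Sketch of an Article 99-style penalty ceiling: the greater of a fixed
# amount or a percentage of worldwide annual turnover. The function name
# and example figures are illustrative assumptions.

def max_fine(fixed_cap_eur: float, turnover_pct: float,
             worldwide_turnover_eur: float) -> float:
    """Return the applicable maximum: fixed cap or turnover share,
    whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Top tier: EUR 35M or 7% of turnover. For EUR 2B worldwide turnover,
# 7% is EUR 140M, which exceeds the fixed cap, so the ceiling is 140M.
ceiling = max_fine(35_000_000, 0.07, 2_000_000_000)
```

For a smaller company with, say, €100 million in turnover, 7% is only €7 million, so the €35 million fixed cap becomes the binding ceiling instead.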
These are not theoretical maximums. The EU has demonstrated through GDPR enforcement that it is willing to impose significant fines on non-EU companies. Meta was fined €1.2 billion for GDPR violations. Amazon received a €746 million penalty. The enforcement infrastructure exists and has been tested at scale.
The Preparation Timeline
August 2026 is approximately five months away. That sounds close because it is. Here's what a realistic preparation timeline looks like:
Months 1-2 (now): Conduct an inventory of every AI system used in hiring, screening, evaluation, promotion, and workforce management. Map which systems are likely classified as high-risk under the Act. Identify which candidate and employee populations fall within the Act's scope.
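The inventory step above is straightforward to operationalize as a structured record per system, with a simple flag for anything touching the Act's hiring-related use cases. A hedged sketch, where the field names, the use-case list, and the example systems are all illustrative assumptions rather than anything the Act prescribes:

```python
# Minimal AI-system inventory record for the months 1-2 step: list every
# system, note its uses and affected populations, and flag likely
# high-risk candidates for legal review. All names here are hypothetical.
from dataclasses import dataclass, field

# Hiring-related uses the Act treats as high-risk (illustrative list).
HIGH_RISK_USES = {"recruitment", "screening", "evaluation",
                  "promotion", "termination"}

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    uses: set                      # e.g. {"screening", "evaluation"}
    eu_populations: list = field(default_factory=list)

    @property
    def likely_high_risk(self) -> bool:
        # Flag for legal review if any hiring-related use applies;
        # this is a triage heuristic, not a legal classification.
        return bool(self.uses & HIGH_RISK_USES)

inventory = [
    AISystemRecord("ResumeRanker", "Acme AI", {"screening"},
                   ["EU applicants"]),
    AISystemRecord("PayrollBot", "Acme AI", {"payroll"}),
]
to_review = [s.name for s in inventory if s.likely_high_risk]
```

The point of the flag is triage: it tells you which systems to put in front of counsel first, not which systems are definitively in scope.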
Months 2-3: Engage legal counsel with EU AI Act expertise to conduct a gap assessment between your current practices and the Act's requirements. This should produce a prioritized remediation plan.
Months 3-4: Begin implementing required changes — risk management frameworks, documentation, transparency processes, human oversight protocols, and data governance improvements. Engage with your AI vendors to ensure they can provide the technical documentation and system access necessary for compliance.
Months 4-5: Test your compliance framework. Conduct internal audit simulations. Verify that your documentation, notification processes, and oversight mechanisms are functioning as designed. Address gaps identified during testing.
Month 5 (August 2026): Ongoing monitoring, documentation maintenance, and continuous compliance. The Act requires ongoing compliance, not a one-time certification.
What This Means for Your AI Vendor Relationships
The EU AI Act places obligations on both providers (developers) and deployers (users) of AI systems. As a deployer, you're responsible for how the AI system is used, ensuring human oversight is maintained, monitoring the system's performance, and reporting serious incidents.
This means your relationship with your AI vendors needs to include specific provisions for: access to technical documentation sufficient for regulatory compliance; cooperation in risk assessments and bias audits; transparency about system updates, changes, and limitations; and data access sufficient for independent monitoring and compliance verification.
If your current vendor cannot or will not provide these things, that's a compliance gap that becomes a liability gap in August 2026.
The Advantage of Acting Now
Compliance preparation is always less expensive and less disruptive than retroactive remediation under regulatory pressure. Companies that begin preparing now have the advantage of time — time to evaluate vendors, implement processes, train teams, and test frameworks without the urgency of an enforcement deadline or, worse, an investigation.
More importantly, companies that build AI governance into their hiring practices now are building a competitive advantage. As regulation spreads — and it will — organizations with established compliance frameworks will be able to move faster, hire with more confidence, and operate with less risk than competitors who are still scrambling to retrofit compliance onto tools that weren't designed for it.
The EU AI Act isn't just a regulatory burden. It's a signal that the era of ungoverned AI in hiring is ending. The companies that read that signal early and act on it will be the ones that thrive.
[1] European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union. Employment provisions classified as high-risk, effective August 2026.
[2] European Union. (2024). Regulation (EU) 2024/1689 (AI Act), Article 99. Maximum penalties for non-compliance.