
What NYC Local Law 144 Means for Your Hiring Process


Trevor Higgs | March 20, 2026

If your company uses any form of automated tool to screen, evaluate, or rank job candidates, New York City has a message: you're now subject to one of the most specific AI hiring regulations in the world.

NYC Local Law 144 [3], which went into effect on July 5, 2023, established the first major regulatory framework specifically governing the use of automated employment decision tools in hiring and promotion. While the law technically applies only to employers and employment agencies operating within New York City, its impact extends far beyond Manhattan.

Every HR leader in the country should be paying attention. Here's why.

What the Law Actually Requires

Local Law 144 is built on four core requirements, each with practical implications for how you source, screen, and hire talent.

The first is the annual bias audit. Any employer using an automated employment decision tool must commission an independent bias audit at least once per year. This isn't an internal review or a vendor-provided certification. It's a formal, third-party assessment that examines the tool's impact across protected categories including race, ethnicity, and sex. The audit must calculate selection rates for each category and the resulting impact ratios, defined in the DCWP rules as a category's selection rate divided by the selection rate of the most-selected category [4].
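The arithmetic behind those impact ratios is straightforward. Here's a minimal sketch in Python; the category labels and counts are hypothetical, and a real audit would cover all required categories and their intersections:

```python
from collections import Counter

def impact_ratios(applicants, selected):
    """Selection rate and impact ratio per category.
    Impact ratio = category selection rate / highest selection rate,
    per the DCWP rules implementing LL144."""
    totals = Counter(applicants)   # applicants per category
    picks = Counter(selected)      # selections per category
    rates = {cat: picks[cat] / totals[cat] for cat in totals}
    top = max(rates.values())
    return {cat: (rate, rate / top) for cat, rate in rates.items()}

# Hypothetical applicant pool: labels and numbers are illustrative only.
applicants = ["group_a"] * 100 + ["group_b"] * 100
selected = ["group_a"] * 40 + ["group_b"] * 24

for cat, (rate, ratio) in impact_ratios(applicants, selected).items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this toy example, group_b's impact ratio of 0.60 falls well below the 0.80 threshold that the EEOC's four-fifths rule of thumb treats as a red flag, which is exactly the kind of disparity an audit is meant to surface.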

The second requirement is public disclosure. The results of the bias audit must be publicly posted on the employer's website. Not buried in a compliance document. Not available upon request. Publicly posted, where candidates and regulators can find them. This is a transparency requirement that many employers have found uncomfortable, but it's not optional.

Third, candidate notification. Employers must notify candidates at least 10 business days before an automated employment decision tool is used in their evaluation. The notification must explain what the tool is, what data it collects, and what job qualifications or characteristics it assesses. This isn't a footnote in the application terms and conditions. It's a standalone, clear disclosure.

Fourth, the alternative process requirement. The candidate notice must include instructions for requesting an alternative selection process or a reasonable accommodation. The DCWP has clarified that the law itself doesn't obligate employers to grant every such request, but in practice this means your hiring workflow should include a human-only track for candidates who opt out.

Why Non-NYC Companies Should Care

Local Law 144 matters beyond New York City for a straightforward reason: it's the regulatory template.

When legislators in other jurisdictions draft AI hiring regulations, they look at what's already been enacted, tested, and enforced. LL144 is that reference point. The Colorado AI Act [2], effective June 2026, mirrors many of the same principles: transparency, disclosure, and impact assessment, applied more broadly to high-risk AI systems.

The EU AI Act [1] takes LL144's core concepts and applies them at continental scale, with significantly larger penalties and broader scope.

If you're a national employer hiring across multiple states or a company with any international hiring activity, the odds that you can avoid these requirements are approaching zero. The question isn't whether your jurisdiction will adopt similar rules. It's when.

The Practical Impact on Your Hiring Process

For most companies, achieving LL144 compliance requires changes in three areas.

Vendor management is the first. If you're using a third-party screening, assessment, or ranking tool, you need to verify that it can support an independent bias audit. Ask your vendor directly: can you provide the data necessary for an independent third-party audit of selection rates across race, ethnicity, and sex? If the answer is "we handle compliance ourselves" or "our tool is already fair," that's not sufficient. You need audit-ready data, not vendor assurances.

Candidate communication is the second area. Your application and hiring workflows need to include clear, timely notifications about automated decision tool usage. This means updating your application portal, your email communications, and your recruiter scripts to include the required disclosures at least 10 business days before the tool is used.
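The notice-timing requirement is, at bottom, a scheduling calculation. Here's a rough sketch, assuming "business days" means weekdays and ignoring holidays (a real implementation should confirm the official definition and account for your holiday calendar):

```python
from datetime import date, timedelta

def earliest_tool_use(notice_sent, business_days=10):
    """Earliest date an automated tool may be used after notice.
    Assumes 'business days' = weekdays; holidays are not handled."""
    d = notice_sent
    counted = 0
    while counted < business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return d

# Notice sent on a Monday; the tool may be used two weeks later.
print(earliest_tool_use(date(2026, 3, 2)))
```

Wiring a check like this into your applicant-tracking workflow makes it hard for a recruiter to run a screening tool before the clock has run.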

Process design is the third. Handling alternative-process requests means you should have a documented human-evaluation track ready for candidates who opt out of automated assessment. It doesn't need to be complex, but it does need to exist, be documented, and be genuinely available.

The Enforcement Reality

LL144 enforcement is handled by the New York City Department of Consumer and Worker Protection, which has the authority to issue penalties of $500 for the first violation and $500 to $1,500 for each subsequent violation, with each day of non-compliance treated as a separate violation [4]. While these penalties may seem modest compared to EU AI Act fines, they add up quickly for ongoing non-compliance. The reputational damage of a public enforcement action can be significant.
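To see how quickly per-day violations compound, here's a back-of-the-envelope worst case for a single tool, assuming every subsequent violation draws the maximum $1,500 penalty:

```python
def max_exposure(days_noncompliant, first=500, subsequent_max=1500):
    """Worst-case civil penalty for one tool under LL144:
    $500 for the first violation, up to $1,500 for each later one,
    with each day of non-compliance counted as a separate violation."""
    if days_noncompliant <= 0:
        return 0
    return first + (days_noncompliant - 1) * subsequent_max

# One quarter (90 days) of non-compliance with a single tool:
print(f"${max_exposure(90):,}")  # prints $134,000
```

And that's one tool; exposure multiplies across every automated tool in the hiring stack.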

More importantly, LL144 non-compliance can be used as evidence in private litigation. If a candidate files a discrimination claim and your company was using a non-audited automated tool without proper notification, that non-compliance becomes exhibit A.

Getting Compliant

Compliance with LL144 requires discipline and intentionality, not massive technology investments. Start by inventorying every automated tool used in your hiring process: not just the ones labeled "AI," but any tool that screens, scores, ranks, or filters candidates algorithmically. Then establish a relationship with an independent auditor qualified to conduct bias assessments. Update your candidate notification processes. Document your alternative evaluation track. Put it on the calendar: bias audits are annual, not one-time.

The companies that treat LL144 as a compliance floor rather than a ceiling, using it as the foundation for a broader AI governance framework, will be the ones best positioned as additional regulations take effect.

Endnotes

[1] European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union. Employment provisions classified as high-risk, effective August 2026.

[2] State of Colorado. (2024). SB 24-205, Concerning Consumer Protections for Artificial Intelligence. Effective June 30, 2026, as amended.

[3] City of New York. (2021). Local Law 144 of 2021, Automated Employment Decision Tools. Effective July 5, 2023.

[4] City of New York, Department of Consumer and Worker Protection. (2023). Rules on Automated Employment Decision Tools (Local Law 144). Penalties: $500 for first violation, $500-$1,500 for subsequent violations per §20-872.
