The $365,000 Mistake: What Happens When You Use AI for Hiring Without a Policy

In March 2023, the EEOC settled its first-ever AI discrimination case. The company wasn’t a Fortune 500. It wasn’t using sophisticated custom AI. It was using off-the-shelf software that automatically rejected job applicants over a certain age. Nobody knew it was doing that. They paid $365,000 to find out.

The Case That Changed the AI Hiring Conversation

iTutorGroup, an online tutoring company, had integrated automated software into their hiring process. The tool screened incoming applications and rejected candidates based on age thresholds: women over 55 and men over 60 were automatically filtered out.

Over 200 qualified applicants were automatically rejected because of their age. The company had no idea this was happening, because it had no policy, no audit, and no oversight of what the AI was actually doing.

 

$365,000

EEOC settlement · iTutorGroup · First-ever AI hiring discrimination case (2023)

The settlement required iTutorGroup to pay restitution to affected applicants, completely reform their AI hiring process, and submit to EEOC monitoring for an extended period. The reputational and operational cost far exceeded the dollar figure.

Here is the part that matters most for your business: the EEOC didn’t sue the software vendor. They sued the employer.

Why You Are Responsible for What Your AI Does

The legal framework around AI in employment is clear, even if business owners don’t realize it. Under Title VII, the Age Discrimination in Employment Act, and the Americans with Disabilities Act, employers are responsible for the outcomes of their hiring processes, regardless of whether those processes are run by humans or algorithms.

The EEOC’s 2023 joint statement with the FTC, DOJ, and CFPB formalized this: existing civil rights laws apply fully to AI-assisted decisions. There is no carve-out for technology. The defense “our software vendor did this, not us” has been explicitly rejected by courts.

The Workday precedent: still active in 2025

In a 2024 case that survived a motion to dismiss in 2025, a federal court held that Workday (an AI vendor) could potentially be liable as an “agent of the employer” under civil rights law. If that precedent holds, both the employer AND the vendor share exposure. That is actually worse for businesses using AI tools, because it creates dual liability without double protection.

The Five AI Hiring Risks Every Employer Faces Right Now

  • Age screening: AI automatically filters candidates by age, even implicitly via proxies like graduation year. (ADEA)
  • Resume ranking bias: AI trained on biased historical data replicates and amplifies past discrimination patterns. (Title VII)
  • Video interview scoring: AI scores facial expressions, vocal tone, or word choice, with no accessibility for deaf applicants or those with speech differences. (ADA + IL HB 3773)
  • No adverse action notice: AI rejects an applicant with no human review and no explanation; FCRA and ECOA both require explainable adverse actions. (FCRA / ECOA)
  • No disclosure to applicants: Illinois and NYC require employers to notify applicants when AI is used in hiring; no disclosure means per-violation fines. (NYC LL144 / IL)

What a Compliant AI Hiring Process Looks Like

This isn’t about avoiding AI in hiring; it’s about using it responsibly. Compliant employers do four things consistently:

  • Human review checkpoint: A qualified human reviews every AI-assisted hiring decision before it’s final. The AI narrows the field; a person makes the call.
  • Bias audit documentation: The AI tool is evaluated at least annually for disparate impact across protected classes, and you keep documentation that the audit happened.
  • Applicant disclosure: If you operate in Illinois or New York City, applicants must be notified that AI is used. This is now legally required, not optional.
  • Vendor contract protections: Your contract with the AI hiring vendor should require them to indemnify you for discrimination claims and provide bias audit reports. Most standard contracts don’t include this; you have to add it.
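To make the bias-audit step concrete, here is a minimal sketch of the four-fifths (80%) rule, the EEOC’s long-standing screening heuristic for disparate impact. The data, group labels, and function names are hypothetical illustrations, not a substitute for a full legal audit.

```python
# Hedged sketch: four-fifths rule check for disparate impact.
# A group whose selection rate is less than 80% of the highest
# group's rate is flagged for further review.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the AI screen."""
    return selected / applicants if applicants else 0.0

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Group's selection rate relative to the highest-rate group."""
    return group_rate / reference_rate if reference_rate else 0.0

# Hypothetical audit data: applicants and AI-screen passes per age band.
audit = {
    "under_40": {"applicants": 500, "selected": 200},   # 40% pass rate
    "40_and_over": {"applicants": 300, "selected": 60}, # 20% pass rate
}

rates = {g: selection_rate(d["selected"], d["applicants"]) for g, d in audit.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = impact_ratio(rate, reference)
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

In this hypothetical, the 40-and-over group passes at half the rate of the under-40 group (impact ratio 0.50, well below 0.80), which is exactly the kind of pattern an annual documented audit is meant to surface before a regulator does.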

The Bottom Line

The iTutorGroup case was a small company using commodity hiring software. They had no idea what the algorithm was doing. That is the rule, not the exception, and the legal exposure is the same whether you know or not. An AI Policy Audit identifies every AI tool touching your hiring process and maps exactly what your compliance obligations are.

Protect Your Hiring Process

Free 30-minute AI risk assessment: no commitment required
Get Your Free Assessment: theprocollective.com/ai-policy-shield
