AI adoption across human resources departments has been swift, and AI is increasingly used across the entire employment lifecycle, including promotions, performance management and compensation. As companies compete for talent, AI is prevalent in hiring, with around 87% of companies reportedly using AI in their recruitment process. But while the adoption of AI was swift, the litigation and regulatory risks are only now starting to emerge.
The legal risks facing HR departments adopting AI have thus far primarily centered around discrimination—where the AI tools allegedly disadvantage job applicants based on their race, gender, age or disability, in violation of civil rights and anti-discrimination laws. As previously reported by HR Executive, new legal risks and theories of liability are emerging, as illustrated by the recently filed complaint against Eightfold AI Inc.
This article discusses the common legal risks arising from HR departments' use of AI, the emerging legal risks surrounding AI and what employers can do to mitigate those risks.
AI bias and disparate impact
The most common legal risk facing HR departments deploying AI has been lawsuits alleging that a company’s use of an AI tool unlawfully disadvantaged job applicants based on their race, gender, age or disability, in violation of civil rights and anti-discrimination laws. These lawsuits apply well‑established disparate impact principles, which assess the legality of facially neutral practices based on their adverse effects on protected groups rather than intent, to new hiring technologies.
For example, in Mobley v. Workday, Inc., U.S. District Court for the Northern District of California, Case No. 3:23-cv-00770, filed Feb. 21, 2023, the plaintiff alleged that he applied to approximately 80-100 positions but was rejected each time by an AI-driven screening tool used by employers and developed by Workday. The class action complaint asserts that the tool disproportionately excluded applicants based on age, race and disability. Although the case remains pending, the court has twice denied Workday's motions to dismiss and has conditionally certified the age discrimination claims.
Similarly, in Harper v. Sirius XM Radio, LLC, U.S. District Court for the Eastern District of Michigan, Case No. 2:25-cv-12403, filed Aug. 4, 2025, the plaintiff alleged race-based employment discrimination against the prospective employer, rather than the AI tool developer. The complaint asserts that Sirius XM uses an AI-driven hiring tool that disproportionately excludes African American applicants by relying on candidate matching, shortlisting and sourcing criteria correlated with race, such as educational background, employment history and zip codes.
Many AI-related bias and disparate impact lawsuits have survived the pleading stage and are proceeding into discovery. These lawsuits present high-exposure risks to employers given the significant monetary and injunctive relief typically awarded in employment discrimination cases.
Emerging FCRA and California consumer protection risks in AI screening tools
In January 2026, job applicants filed a first-of-its-kind lawsuit against Eightfold AI Inc. ("Eightfold"), a business-to-business company specializing in workforce development. In their complaint, the plaintiffs allege that Eightfold's AI tools unlawfully collect personal data about job applicants from their social media profiles, internet browsing activity and other sources to create reports or "dossiers" about job applicants and rank their "likelihood of success" on the job. Eightfold then allegedly sells these reports to prospective employers without giving applicants an opportunity to review them or correct any inaccuracies. Based on this conduct, the plaintiffs allege that Eightfold violated the Fair Credit Reporting Act ("FCRA"), 15 U.S.C. § 1681, et seq., as well as the California Unfair Competition Law ("UCL") and the California Investigative Consumer Reporting Agencies Act ("ICRAA").
Broadly speaking, the FCRA’s main requirements are threefold. First, the FCRA requires companies that qualify as a “consumer reporting agency” (“CRA”) to obtain written consent from an individual prior to providing information about that person to an employer (or prospective employer). Second, CRAs must maintain reasonable procedures to ensure maximum possible accuracy, confidentiality and proper usage. And third, CRAs must allow consumers to access their files and are required to investigate disputed information within 30 days.
According to the plaintiffs, Eightfold’s job applicant reports were “consumer reports” under the FCRA, and therefore Eightfold was required—but failed—to comply with the FCRA’s requirements. Specifically, plaintiffs allege that Eightfold violated the FCRA in two primary ways:
- by failing to obtain the employer’s or other user’s certification of compliance with the FCRA’s disclosure, authorization, notification and dispute requirements prior to providing consumer reports about plaintiffs for employment purposes; and
- by failing to take reasonable steps to ensure that such reports only contained permissible information and were only used by permitted users for permissible purposes in accordance with FCRA’s requirements.
Plaintiffs also bring similar claims under California’s FCRA counterpart, the ICRAA, which establishes stricter disclosure requirements for employers.
One additional wrinkle to watch: The plaintiffs also brought a claim under California's Unfair Competition Law, which generally protects California consumers from unlawful, unfair or fraudulent business practices, such as practices that violate anti-discrimination laws. Although the plaintiffs' unfair competition claim may be partially (or fully) preempted by the FCRA, it will be worth watching whether the court reaches the somewhat novel holding that Eightfold's practices violated the UCL.
How employers can mitigate legal risks
While Eightfold’s AI tool is new, the legal theories advanced against Eightfold are relatively well-settled. Notably, the Consumer Financial Protection Bureau (“CFPB”) issued guidance in November 2024 explaining that employers are required to adhere to the FCRA when making employment decisions utilizing background dossiers, algorithmic scores and other third-party consumer reports.
To mitigate risks from potential AI-related lawsuits, employers and HR departments should take certain mitigation measures, including:
- Develop clear company policies and guidelines regarding use of AI tools in the workplace, especially for HR-related functions such as recruitment, promotions, benefits and disciplinary actions.
- Before contracting with an AI vendor, request more information from the vendor to better understand the legal implications of the vendor's AI tools. Companies may want to create a list of standard questions to pose to AI vendors.
- When contracting with an AI vendor, HR and legal departments should consider the allocation of responsibilities and risks in key employment compliance areas, including:
- compliance with federal, state and local anti‑discrimination and employment laws;
- ADA and accessibility compliance in the screening process;
- data sources and accuracy of data;
- bias testing, impact assessments and audit rights (including access to information needed to evaluate adverse impact);
- data security and retention obligations;
- compliance with applicable laws regarding required notice and applicant consent; and
- allocation of liability and indemnification for claims arising from the AI tool or application.
- Before deploying an AI tool, ensure that all necessary disclosures are made to those affected by the AI tool (e.g., job applicants), and obtain any legally required consent. In general, the more transparent a company is, the better insulated it will be from legal risk.
- Once an AI tool has been deployed, maintain adequate human oversight of any AI tools and document such oversight. In the HR context, human oversight should look for indicia of bias, disparate impact, inaccuracy of information or lack of transparency.
- Review your company’s compliance with the FCRA and consider whether your company’s use of AI technology in the recruitment process triggers FCRA disclosures.
- Stay abreast of the relevant federal and state laws—this is a rapidly evolving area of law.