Since Illinois’ Limit Predictive Analytics Use Act took effect, workplace AI risk is no longer a theoretical compliance concern. It’s a live litigation issue. Employers now face a civil cause of action tied to discriminatory AI use and failure to disclose.
Illinois isn’t a quirky outlier. It’s one visible node in a fast-emerging national patchwork, and arguably the most consequential. New York and Colorado have made similar legislative moves, together representing tens of millions of workers. What’s taking shape across these three states will shape how a large slice of the U.S. labor market experiences automated hiring and management tools.
A ‘plaintiff’s blueprint state’ for AI employment law
Littler’s AI practice, which advises employers on deploying AI and defends AI-based employment class actions, has a pointed take on Illinois specifically, calling it “a plaintiff’s blueprint state.”
Britney Torres, co-chair of Littler’s AI & Technology Practice Group, told HR Executive that “courts will look to AI-specific and generally applicable discrimination authority to determine where liability lands for biased employment decisions arising out of AI tools.”
The liability picture gets complicated quickly, particularly around joint liability, an area HR knows well: hiring and employment practices are increasingly carried out hand-in-hand with a platform or vendor.

Torres points to California as an example. Courts there will need to interpret discrimination precedents like Raines v. U.S. Healthworks Medical Group, which holds that an employer’s business entity “agents” may be considered “employers” and directly liable for employment discrimination under certain circumstances.
“Regardless of how courts interpret authority and ultimately apportion fault, joint and several liability will likely be a key issue for years to come, making it critical for all to document measures that protect against bias,” Torres says.
Waiting on the federal government to streamline legislation isn’t an option. Federal rules on AI in the workplace remain in flux because policymakers and agencies are still wrestling with competing views about how much to lean on existing civil rights and labor laws versus creating new, AI‑specific frameworks, as HR Executive recently reported.
That said, a federal court is allowing a closely watched class and collective action against Workday’s AI-driven hiring tools to move forward, a signal that judges are prepared to scrutinize algorithmic screening under existing anti-discrimination laws.
Already signed that vendor contract? You may have exposure
Many HR leaders locked in vendor agreements before any of these state laws existed. Torres confirms that those who relied on vendor representations about validation and bias during contract negotiations could face exposure related to anti-bias assessments.
The stakes vary by state. Failure to conduct an anti-bias assessment could be a factor weighing against the employer’s good faith. More seriously, it could be a direct violation of the law, such as under the Colorado Artificial Intelligence Act, set to take effect June 30, 2026.
If an employer runs an independent audit and discovers potential disparate impact in a tool already in use, the duty of reasonable care kicks in. Torres describes what remediation looks like in practice. “If potential disparate impact is identified in a tool that is already being used, the employer should take immediate action to avoid harm by pausing use of the tool or adding safeguards to the process, such as increased human oversight.”
Torres adds that the investigation should assess the cause of the disparity, identify potential less-discriminatory alternatives and determine whether remediation is necessary. Remediation could include model retraining, adjusted criteria or scoring, or enhanced human oversight. Any modified tool should be validated before it’s redeployed.
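For teams running those independent audits, the screening statistic most often cited is the EEOC’s four-fifths rule: a selection rate for any group below 80% of the highest group’s rate is treated as evidence of potential adverse impact. Below is a minimal sketch of that check in Python; the group labels and counts are hypothetical, and a real audit would pair this with statistical significance testing and legal review.

```python
# Minimal four-fifths (80%) rule check for adverse impact.
# The counts below are hypothetical illustrations; a real audit
# would also apply significance tests and involve counsel.

applicants = {"group_a": 200, "group_b": 150}   # applicants screened by the tool
advanced   = {"group_a": 90,  "group_b": 45}    # applicants the tool advanced

# Selection rate per group: advanced / applicants.
rates = {g: advanced[g] / applicants[g] for g in applicants}

# Compare every group's rate to the highest-selected group's rate.
benchmark = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2%}, ratio={impact_ratio:.2f} -> {flag}")
```

In this illustration, group_b’s selection rate is two-thirds of group_a’s, so the 80% threshold flags it, which is exactly the kind of finding that would trigger the pause-and-remediate steps Torres describes.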
Throughout, documentation is essential. It’s the paper trail that substantiates a good-faith compliance effort, says Torres.
What about drift?
What if a tool was clean at implementation but became biased over time? Torres acknowledges the complexity. “A claim regarding a tool that was not biased initially but became biased after use would likely be focused against the employer, but could also allege developer liability.”
She adds that clarity may be coming. “More guidance on this topic may soon be available as liability for AI harms is an area of focus this legislative session. Nine bills on the topic are currently pending in six different states.”
The highest-risk tools are the ones used incorrectly
Not all AI-enabled HR tech carries equal risk, though the category as a whole tends to land in sensitive territory. Torres offers a straightforward litmus test: “The highest-risk employment AI tools are those that are improperly used.”
Because assessment, notice and oversight requirements are typically specific to a tool’s intended use case, deploying a tool outside that scope creates real vulnerability. The solution, she argues, is governance. “Employers can minimize the risks of improper use with thoughtful adoption strategies and governance, which not only protect the business but also unlock the AI tool’s capabilities.”
For HR leaders, the window for treating AI tools as vendor-managed, low-oversight technology is closing. The legal infrastructure to hold employers accountable is already in place and growing.