State lawmakers seek to regulate employer use of AI for wage decisions

As employers continue to find new ways to use artificial intelligence tools and software to support business operations, state legislators have taken notice. In particular, lawmakers are increasingly scrutinizing employers’ use of AI and automated decision tools to set or influence employee compensation, with the stated aims of curbing potentially discriminatory impacts of algorithmic wage setting and increasing transparency for employees and applicants about the use of such technology.

See also: State AI regulation ban nixed. What it means for employers, tech firms

Recent state AI regulation activity

Several states—including California, Colorado, Illinois and Texas—have recently enacted legislation seeking to place parameters on AI-driven compensation and employment decisions. These include the following:

  • The Colorado Artificial Intelligence Act (“CAIA”)—Effective Feb. 1, 2026 (with an enforcement date of June 30, 2026), the CAIA requires employers to exercise “reasonable care” when deploying AI systems in “high-risk” areas—including compensation, promotion, hiring and other employment decisions—to avoid “algorithmic discrimination.” To satisfy this duty of care, employers using AI systems to make employment decisions must implement risk-management policies and programs, conduct annual and other periodic impact assessments to identify potential bias, and notify employees when AI is used to make employment decisions.
  • Illinois’ Amendments to the Illinois Human Rights Act (“IHRA”)—Effective as of Jan. 1, 2026, the IHRA amendment prohibits employers from using AI tools in connection with employment decisions, including those related to wage setting, unless they (1) notify employees when using AI for these purposes and (2) ensure the tools are not used in a manner that results in discrimination based on protected classes.
  • The Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”)—Effective as of Jan. 1, 2026, TRAIGA prohibits employers from using AI tools in employment decisions with the intent to discriminate against protected classes. Unlike the laws of other states, TRAIGA does not allow for liability based solely on disparate impact or unintentional discrimination.
  • California Privacy Protection Agency (“CPPA”) Regulations—Effective Jan. 1, 2027, the CPPA’s updated regulations under the California Consumer Privacy Act restrict employers’ use of automated decision-making technology in employment decisions, including compensation and other terms and conditions of employment. The regulations require employers to conduct risk assessments, provide employees with pre-use notices and allow employees to opt out of automated decision-making processes under certain circumstances.
Co-author Robert Dumbacher of Hunton Andrews Kurth LLP

Various state lawmakers have continued the trend in 2026, introducing bills containing similar restrictions. For example, California Senate Bill 947, also known as the “No Robo Bosses Act,” was introduced in February and could significantly restrict how employers use artificial intelligence to make employment decisions. If enacted, the bill would prohibit employers from using automated decision-making systems to process worker data as inputs or outputs to inform employee compensation—unless the employer can clearly demonstrate that any differences in compensation for substantially similar or comparable work assignments are based upon cost differentials in performing the task involved, or that the data was directly related to the tasks that the worker was hired to perform. Notably, this is a revised version of California Senate Bill 7, the original “No Robo Bosses Act,” which was vetoed by California Gov. Gavin Newsom on Oct. 13, 2025.

While the various proposed and enacted state laws are not all identical, they share common features. First, they generally define “automated decision systems” to include systems, software or processes—including those which rely on machine learning or AI techniques—that are used to assist or replace human decision-making. In the employment context, these definitions encompass automated human resources tools and software systems that use predefined rules to process data through algorithms and assist with the performance of human resources functions. These tools could include everything from basic rule-based systems to sophisticated technologies powered by generative AI.

Additionally, the proposed and enacted state laws provide guidance for conduct that would not constitute unlawful use of algorithmic wage setting. These exclusions include, for example, when employers (1) offer individualized wages based on data related to services workers perform; (2) disclose in plain language their use of automated decision systems, including the data considered by the systems and how the systems consider such data, to employees and applicants whose compensation is influenced or determined by these methods; and (3) develop and implement procedures to ensure the accuracy of the data considered by automated decision systems in setting wages.

Legal risks associated with AI-driven compensation decisions

The lawmakers advocating for these proposed state laws have emphasized that the unregulated use of AI by employers in compensation decisions may result in discriminatory compensation results. Indeed, employers’ AI-driven compensation decisions may be covered by and actionable under a variety of employment laws, including Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, the Equal Pay Act and/or applicable state and local laws.

Co-author Keenan Judge of Hunton Andrews Kurth LLP

The nature of automated decision systems already creates unique legal risk for employers, particularly when those systems are relied upon to make employee compensation determinations. A key challenge employers face when using AI-driven tools is the lack of transparency in how the tools arrive at their conclusions or recommendations. While human decision-makers can explain the reasoning behind compensation decisions, it is difficult—and in some instances may be impossible—to discern the reasoning underlying decisions made by certain AI tools. This leaves employers vulnerable to legal challenges to compensation decisions rendered by AI tools or software, and the scope of potential liability may be amplified if those processes are used to set or influence the compensation of a large number of employees or applicants.

Takeaways for employers

For now, employers should ensure compliance with applicable federal and state laws that have been enacted or are scheduled to take effect in the near future. This includes, at a minimum, identifying each AI tool currently used in employment decision-making and assessing whether those tools are subject to regulation under any state or local laws. Employers should also establish and implement a comprehensive AI policy that outlines internal procedures for using AI, provides required notice to employees and applicants about AI use and mandates human oversight of AI-driven recommendations.

Looking forward, employers should actively monitor developments in federal, state and local legislation and agency regulations aimed at governing the use of AI in decisions related to employee compensation and other employment terms. As states move rapidly to establish boundaries for AI’s role in workplace decision-making, employers that proactively audit their AI-related practices and prioritize transparent human involvement in decision-making processes, including compensation decisions, will be better positioned to minimize legal risks and adapt to evolving regulatory requirements.

The post State lawmakers seek to regulate employer use of AI for wage decisions appeared first on HR Executive.
