Why abandoning disparate impact protections is a disaster in the age of AI

As artificial intelligence rapidly reshapes hiring, pay decisions and performance evaluations, the federal agency responsible for protecting workers from discrimination is stepping back instead of stepping up.

The Equal Employment Opportunity Commission (EEOC) recently abandoned its longstanding policy of investigating disparate impact claims, which address policies that appear neutral but create unnecessary barriers that disproportionately harm certain groups.

This removes one of the only tools workers have to challenge discrimination caused by AI. The decision represents a major change in how the EEOC enforces civil rights and dismantles one of the most effective tools for uncovering workplace discrimination, especially as AI-driven hiring makes algorithmic bias more common.


The modern workplace is increasingly automated. Whether you work in an office, from home, in a factory or in a store, chances are you interact with algorithmic systems every day. Technology now screens applicants, monitors performance, assigns tasks, sets pay and evaluates productivity. AI itself is not necessarily the problem. The problem is that AI systems learn from historical data that already encodes biased patterns, and then reproduce that harm at scale.

In automated workplaces, discrimination often emerges not through explicit human actions but through black-box tools. An algorithm may not “see” the color of a person’s skin, but it can reliably screen out people of color more often than white applicants. It can depress wages for workers with foreign-sounding names and replicate similar inequities at unprecedented scale.

Resume-screening tools: a prime example

Consider resume-screening tools, among the most common automated hiring systems.

A comprehensive study of leading large language models from three developers (Mistral AI, Salesforce and Contextual AI) found significant racial, gender and intersectional bias in how the models ranked resumes. The LLMs favored white-associated names 85% of the time, favored female-associated names only 11% of the time and never favored Black male-associated names over white male-associated names. Because these tools are opaque and difficult to challenge case by case, regulatory scrutiny is essential to hold employers accountable.
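Audits like this one typically hold the resume text fixed and vary only the name, then compare how the system scores each name group. A minimal sketch of that methodology in Python (the scoring function here is a deterministic stub standing in for whatever opaque model is under audit, and the names and groups are illustrative, not the study's actual setup):

```python
def score_resume(resume: str, name: str) -> float:
    """Stand-in for the opaque screening model being audited.
    A real audit would query the actual system; this stub is
    deterministic only so the sketch runs end to end."""
    return (sum(map(ord, name)) % 100) / 100  # placeholder score

def name_swap_audit(resume: str, names_by_group: dict[str, list[str]]) -> dict[str, float]:
    """Average score per name group on the SAME resume.
    Large gaps between groups signal name-based bias."""
    return {
        group: sum(score_resume(resume, n) for n in names) / len(names)
        for group, names in names_by_group.items()
    }

# Illustrative inputs only
resume = "10 years of experience in logistics management ..."
groups = {
    "group_a_names": ["Alice Smith", "Emily Baker"],
    "group_b_names": ["Lakisha Washington", "Jamal Robinson"],
}
print(name_swap_audit(resume, groups))
```

Because only the name varies, any consistent score gap between groups is attributable to the name itself, which is the core logic behind resume-audit studies.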

The EEOC has existed to investigate discrimination and protect workers regardless of race, gender, national origin, age, disability or religion. One of its most important tools has been the disparate impact framework, which allows the agency to address discriminatory patterns even when no individual can point to a single intentional act of bias. By walking away from this framework in cases involving automated systems, the EEOC leaves millions vulnerable right when AI is making more decisions about our work lives than ever before.
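The classic statistical screen under this framework is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if one group's selection rate falls below 80% of the highest group's rate, the practice may be flagged for disparate impact. A minimal sketch, using hypothetical numbers:

```python
# Four-fifths (80%) rule, the common first-pass test for disparate
# impact under the EEOC's Uniform Guidelines on Employee Selection
# Procedures. All applicant counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who pass the screen."""
    return selected / applicants

def adverse_impact_ratio(rate_group: float, rate_highest: float) -> float:
    """A group's selection rate relative to the highest group's rate."""
    return rate_group / rate_highest

# Hypothetical outcomes of an automated resume screen
rates = {
    "group_a": selection_rate(selected=60, applicants=100),  # 0.60
    "group_b": selection_rate(selected=24, applicants=100),  # 0.24
}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, highest)
    flag = "potential disparate impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

In this illustration, group_b's ratio is 0.24 / 0.60 = 0.40, well under the 0.80 threshold, which is the kind of pattern the framework lets regulators pursue without proof of anyone's intent.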

Without the ability to bring disparate impact claims, workers will have nowhere to turn when AI systems block them from jobs, promotions or fair pay. Worse, employers will have perverse incentives to rely even more heavily on complex, inscrutable and high-risk AI tools, free from fear of accountability.

Facial recognition: another opportunity for misuse

This retreat fits within a broader dismantling of safeguards meant to protect people from discrimination in the face of rapid technological change. Like automated hiring tools, facial recognition systems contain well-documented racial and gender bias, including misidentifying women of color in more than one of every three cases. Yet the Trump administration is embracing these tools while removing essential checks on their use. The federal government's AI Action Plan prioritizes deregulation while discarding oversight and risk mitigation.

Recent reporting shows that Immigration and Customs Enforcement is using a facial recognition app, Mobile Fortify, in alarming ways that are certain to produce wrongful detentions. Officers are treating face-match results as definitive identifications, even over birth certificates and other contradicting evidence. This runs directly against the government’s own policy that facial recognition matches should be treated only as investigatory leads and never as the sole basis for arrest.

This approach is brazenly reckless. The EEOC’s abandonment of disparate impact claims in AI-related cases is not a small procedural change, but part of a broader trend that prioritizes speed and deregulation over people’s rights and lives. Everyone deserves a workplace free of discrimination and a fair chance to challenge harmful and discriminatory practices, whether the harm comes from a biased human decision or a flawed algorithm.

Workers cannot be expected to compete in an economy where AI systems make the rules and no one is accountable for the outcomes. When a pattern of discrimination exists, workers must be able to challenge the system causing the harm, not just hope a biased model somehow fixes itself. The federal government must restore these protections before irreversible damage is done.

The post Why abandoning disparate impact protections is a disaster in the age of AI appeared first on HR Executive.
