What an employment attorney wants HR leaders to know about AI risk

Imagine an employee pastes client information into ChatGPT to get a quick summary, with no malicious intent. It’s a worker treating an AI tool like any other productivity shortcut, without insight into what happens to the data on the other side.

It’s exactly the kind of scenario that employment attorney Tara Humma warns creates legal exposure for employers. Humma, who advises multi-state employers at Rimon Law, says risk often comes from well-meaning employees who simply haven’t been told where the lines are.

AI compliance: ‘The law says what it says’

But that’s not an excuse for the organization if something goes wrong. “The law says what it says,” Humma says. “It says you can’t discriminate, you have to protect confidential information. It doesn’t matter what tool you use to break the law.”

Whether an employee posts patient data on social media or pastes it into an open-source AI tool, the confidentiality violation is the same. The second scenario may sound less reckless, but it is equally dangerous, and intent doesn’t change the exposure.

Consider one of the most protected categories of data: health information. Although this is well-known territory for HR teams, privacy shortfalls persist. Since the HIPAA Privacy Rule took effect, federal regulators have received more than 366,000 complaints and imposed nearly $144 million in penalties across 147 cases, often for failures to protect patient information. In 2024 alone, covered entities reported 725 large healthcare data breaches, and the Office for Civil Rights closed 22 investigations with financial penalties, according to the U.S. Department of Health and Human Services.

Governance is a compliance move

Meanwhile, consumer-level AI tools continue to proliferate. That reality shifts AI governance from a technology conversation to a compliance one, and it raises the stakes on having a policy that actually does something.

One instance that bears directly on HR teams is a recent case in which the EEOC alleged that an employer’s screening software was programmed to automatically reject women over 55 and men over 60. More than 200 qualified applicants were allegedly turned away because of age, and the resulting consent decree included $365,000 in monetary relief and years of monitoring and training obligations, according to the EEOC.

Humma says most early attempts at AI use policies fall short in the same ways. They’re too vague to be useful. Telling employees not to share “confidential information” with AI tools isn’t enough if workers don’t understand what that means in the context of their day-to-day work.

A functional policy, she says, needs to name the specific categories of information that can’t go into open-source tools and explain why in terms employees recognize.

Regulations, discrimination and other hot spots

Regulated industries face a steeper climb. In healthcare, legal and financial services, confidentiality requirements are already strict, and AI introduces new ways to breach them without anyone realizing it. Attorneys advise that those employers should be moving faster than most.

Regulations can be a moving goalpost for employers who think they’ve already addressed the issue. States, including Illinois, have passed laws specifically governing AI in employment decisions, and experts expect more to come.

Illinois has now amended its Human Rights Act to make algorithmic discrimination an actionable civil rights violation. The amendment bars employers from using AI tools that have a discriminatory effect on protected classes in hiring and other employment decisions, and it requires notice to workers when AI is used in those decisions.

EEOC guidance holds employers responsible if an AI tool introduces bias into hiring or performance decisions, even if the employer didn’t build the tool. And some legislation reaches further than expected, covering tools employers have used for years, such as applicant screening software and personality assessments, without ever labeling them as AI.

“Try to get ahead of it as much as possible,” Humma says, “so that your company is not the next headline.”


The post What an employment attorney wants HR leaders to know about AI risk appeared first on HR Executive.
