A lawsuit over AI notetakers should be on every HR leader’s radar

The notetakers are ready to start the meeting, but will they always be so welcome? A class action lawsuit targeting one of the most widely used AI transcription tools is drawing new attention to a compliance gap for HR teams: the legal risks of AI-powered meeting notetakers in the workplace.

The complaint in In re Otter.AI Privacy Litigation, a consolidated case before Judge Eumi K. Lee in the U.S. District Court for the Northern District of California, alleges that Otter.ai’s notetaking tools recorded private conversations without the consent of all participants and used those recordings to train its AI models without adequate disclosure. No substantive rulings have been issued yet, but employment attorneys say the case is already signaling where liability could land for employers.

“The AI transcription and recording issue is a hot issue,” says Bradford Kelley, a shareholder at Littler Mendelson who co-authored a February 2026 analysis of the litigation. He told HR Executive that human resource teams should be “very interested in this case.”

Kelley says that his firm gets quite a few questions from employers operating in states that have all-party consent: “What do we need to do to make sure we’re in line with best practices?”

Federal wiretap laws and AI notetakers

According to Littler, federal wiretap law and most state counterparts follow a one‑party consent rule. Still, approximately a dozen states require all participants to consent to the interception or recording of a conversation. A single virtual meeting that includes employees, customers or candidates in multiple jurisdictions can therefore trigger overlapping and sometimes inconsistent consent obligations that many employers have not fully mapped, according to Littler.

Under the federal Wiretap Act, Littler points out that private plaintiffs may seek statutory damages calculated as the greater of a per‑day amount or a minimum statutory award. At the same time, Illinois’ Biometric Information Privacy Act (BIPA) authorizes statutory damages for improper collection or use of biometric identifiers, including when AI note‑taking tools identify individual speakers by their voiceprints.

Read more: Congressional witnesses split on AI regulation, state laws stumble

Risk exposure areas

Kelley and co-author Zoe Argento, also a Littler shareholder, outline seven risk areas employers should evaluate: consent, biometrics, accuracy, discrimination and disparate impact, attorney-client privilege, data retention and confidentiality. The breadth of that list reflects just how many legal frameworks a single AI notetaker can activate at once.

On the discrimination front, the attorneys flag that AI transcription tools may consistently misunderstand accents, speech impediments or other characteristics tied to protected classes. This can create disparate impact exposure if those transcripts inform performance reviews, hiring decisions or disciplinary actions. Employers using these tools in employment decision-making may also trigger AI-specific notice and audit requirements in jurisdictions including New York City, Illinois and California.

Compliance for multinational employers

For multinational employers, the compliance picture grows significantly more complex. The GDPR imposes a consent framework that is different from, and more demanding than, typical U.S. rules for call recording: valid consent must be freely given, specific and unambiguous from each individual whose data is processed. A model that relies on one meeting participant to authorize recording on behalf of all others would therefore likely not satisfy the regulation.

Data transfer is a compounding issue, since recordings processed by U.S.-based vendors must comply with international transfer mechanisms such as Standard Contractual Clauses. And beginning in August 2026, the EU AI Act adds a separate layer of obligations. AI systems used for worker monitoring and management may be classified as high-risk, a category that could encompass tools offering sentiment analytics or productivity scoring alongside transcription.

In co-determination countries such as Germany and France, deploying an AI notetaker may also require works council consultation before rollout, a requirement with no U.S. equivalent that multinational HR teams frequently overlook.

What HR leaders can do about AI notetakers

The Littler analysis raises a practical reality that HR leaders may find uncomfortable: Banning AI notetakers outright is likely unenforceable. One in five professionals reported frequently using AI to draft meeting notes in a 2025 survey, and employees are bringing these tools in whether or not employers have addressed them.

The attorneys’ recommendation is to get ahead of it. Select, configure and control a vetted tool rather than cede that ground to whatever employees happen to download.

That means vetting vendors on data security and configuration options, turning off features like voice recognition where biometric risk is high, setting up consent notices before meetings, establishing short data retention periods and building clear policies on when and where AI notetakers are permitted.

The Otter.ai litigation has not yet produced rulings that set binding precedent. But Kelley says HR leaders should be paying attention now before courts define the boundaries for them.


The post A lawsuit over AI notetakers should be on every HR leader’s radar appeared first on HR Executive.