As AI lobbying hits record levels, federal policy is leaning toward flexibility and innovation. This leaves HR leaders to close the risk gap inside their own organizations.
Inside the Beltway, AI is one of the hottest topics in town. Lobbying firms pulled in almost $92 million from AI-related issues in the first three quarters of 2025 alone, per a Bloomberg Government analysis, and full-year totals neared $130 million as influence spending accelerated into 2026. Bloomberg reported that tech giants and AI vendors spent over $100 million pushing for light-touch national rules.
For HR leaders, that power play in Washington is not an abstract policy story. It shapes the extent of the organization’s exposure when deploying AI for hiring, performance management, monitoring and workforce planning.
State vs federal regulations
At the federal level, the Trump administration has focused on accelerating U.S. AI dominance and minimizing “cumbersome regulation,” including an executive order aimed at curbing state attempts to regulate AI piecemeal.
That push, backed by industry lobbying, has coincided with the revocation of Biden-era AI directives and the scaling back of EEOC and DOL AI guidance on bias and inclusive hiring, according to a brief from Wiley Rein LLP, leaving employers with a thinner federal playbook.
States, however, are not waiting. Legislatures in places like California, Colorado, Illinois and Texas are moving ahead with AI‑specific employment rules that treat tools used in hiring and personnel decisions as high‑risk.
In these jurisdictions, employers are confronting requirements for notices and consent, impact assessments, bias testing and appeal rights for candidates and employees. Some proposals also reach into electronic monitoring, requiring advance written notice before rolling out AI‑enabled productivity tracking or surveillance.
Two types of risk
The result is a widening gap between a permissive, innovation‑driven federal posture and a patchwork of tougher state standards. For CHROs, that gap translates into risk.

First, there is liability. Even without a comprehensive federal AI statute, existing employment and civil‑rights laws already apply to algorithmic decisions.
Regulators and plaintiffs’ attorneys increasingly treat AI‑driven tools as extensions of the employer, not as neutral third‑party systems, Britney Torres, co-chair of Littler’s AI & Technology Practice Group, told HR Executive.
A biased screening model can scale discrimination across thousands of applicants in a way no individual hiring manager ever could. When something goes wrong, HR and the employer’s brand are on the hook.
Second, there is compliance complexity, according to a brief from Foley & Lardner LLP. HR teams now have to map their workforce footprint against divergent state obligations, align notice and consent workflows, and maintain documentation of how AI influences personnel decisions.
HR as shadow regulator
In this environment, HR could effectively become the shadow regulator of workplace AI. To play that role credibly, CHROs can move on four fronts:
Inventory and transparency
Catalogue every AI‑assisted tool touching the people lifecycle, from resume screening and video interviews to performance analytics and scheduling, and document what decisions it informs.
Governance and oversight
Establish a cross‑functional AI governance group (HR, legal, IT, data) with clear authority to approve use cases, set guardrails and require human review for high‑stakes decisions.

Asha Palmer, Skillsoft’s SVP of compliance solutions, says she is observing an uptick in cross-departmental collaboration, particularly between HR and IT. “We’re seeing them consolidate their efforts and their resources so that they can get more bang for their buck out in the market and not necessarily sacrifice their quality,” she told HR Executive.
Vendor accountability
Bake transparency, audit cooperation and bias‑mitigation commitments into contracts with HR tech providers. HR leaders should recognize that Washington’s approach will not shield them from state enforcement or private lawsuits.
Data hygiene and workforce impact
Strengthen the organization’s ability to track which positions are changed, eliminated or created because of AI. Invest in reskilling strategies that respond to worker and policymaker anxiety about job loss.
The lobbying dollars flowing through Washington may succeed in keeping federal AI rules relatively flexible. But that does not eliminate the expectations of employees, regulators and the public.
For HR leaders, the most prudent response is to assume that AI in the workplace will be judged by the highest standard, and to build an internal governance regime that meets it.
The post AI’s $130M lobbying blitz hands HR the real AI compliance burden appeared first on HR Executive.