Dayforce top AI exec’s advice on closing the readiness gap

Nearly two out of three business leaders say they see data and AI skills gaps in their organization, according to a new survey of business leaders in the U.S. and the U.K. from learning platform DataCamp. Yet less than half say their organizations provide even basic AI literacy training. And among those who have invested in AI, a significant share report they’ve yet to see a positive return.

That paradox—high expectations, low readiness, elusive ROI—is exactly what David Lloyd, chief AI officer at Dayforce, navigates every day.

Lloyd has spent more than 20 years in AI, long enough to remember building conversational AI in 2004 that could understand user intent but could only respond with fixed answers rather than generative ones. From that vantage point, what strikes him now isn’t the pace of technological change. It’s the gap between what organizations could be doing and what they’re actually doing.

David Lloyd, chief AI officer, Dayforce

“Most organizations don’t have a plan,” he said. “They’re trying to get a handle on the trust side, the ethics side, the literacy side. And then there’s the value side—and if you think AI is just about summarizing documents, you’ve missed the boat…you’re not going to change the way the business operates.”

Two camps, one divide for the AI readiness gap

Lloyd sees organizations sorting into two broad groups. The first is sprinkling AI on top of existing processes for incremental gains, familiar workflows and modest returns. The second, much smaller group is asking a harder question: What would this look like if we rebuilt it with AI at the center?

He prefers the term “AI fluent” to “AI native” to describe where organizations need to be. “A lot of people have to learn the language,” he said. “But once they’re fluent in it, that’s great, because they have experience and they’re fluent, versus just being native but maybe not experienced.”

The distinction matters because fluency implies deliberateness. And deliberateness, Lloyd argues, is exactly what’s missing in most AI rollouts.

Read more: Dayforce execs reveal how AI boosts employee career growth

The bowling bumper framework

To explain what effective AI governance looks like inside an organization, Lloyd returns to a bowling analogy. Imagine taking a child bowling with no bumpers. Most balls go straight into the gutter. Add the bumpers, and suddenly the child is learning: every roll stays in play, every throw builds skill.

“Most organizations need that component of guardrails,” he said. “To help employees know where the edges are and help the organization understand and apply AI in a way that they’re going to learn a lot more out of it.”

Without those guardrails, you get what Lloyd calls “shadow AI,” employees adopting tools on their own, vendors pushing new capabilities without organizational buy-in and a general sense that anything goes. “All of a sudden it pops up out of nowhere and the organization is like, ‘surprise, you have a new AI tool,’ ” he said. “That should never happen to a [Dayforce] customer. They should be deliberately involved in turning it on, not having to turn it off.”

The data underneath it all for the AI readiness gap

One of the most consistent barriers Lloyd hears about is data quality. A CIO he spoke with recently at a summit in Los Angeles described exactly this challenge: an organization with clean, trusted data inside its HCM platform, surrounded by an ecosystem of data it was struggling to standardize and trust.

“To use some basic AI capabilities doesn’t require the cleanest of data, depending on what you’re doing,” Lloyd said. “But if you’re actually starting to make decisions around it, you’ve got to be able to trust it.”

Building the ethics infrastructure

Beyond data and governance frameworks, Lloyd has overseen a more formal architecture for responsible AI at Dayforce: an external AI Ethics Council, launched in September 2025. This is made up of five outside experts from fields spanning AI governance, regulatory compliance, data ethics and human capital consulting. The council’s mandate is rooted in a question Lloyd returns to repeatedly: not just “can we” apply AI in a given context, but “should we.”

“You have some of the most trusted data in the world at your disposal,” he said, speaking to CHROs directly. “That is both the benefit and the problem.”

Dayforce has also pursued ISO 42001 certification, one of a small number of companies globally to hold it, as a way of demonstrating that its AI controls are auditable, consistent and defensible. The certification attests to an organization’s ability to govern AI responsibly and ethically. “It tells you your controls are right, that you’re handling AI the right way with data, that you have ways of ensuring at an enterprise-wide level that you’re really taking care of it,” Lloyd said. “When customers know that, the value piece is a lot easier to tackle.”

Read more: Need a chief AI officer? Look to these British orgs that have made the hire

A changed mindset

For HR leaders who haven’t yet formalized their approach to AI, Lloyd has a concrete starting point. He says to build a small, tight cross-functional group—perhaps the CHRO, CIO, legal or privacy counsel and finance—to evaluate AI use cases together. This doesn’t need to be a large committee, but a working group with authority to ask the hard questions.

“If that’s not in place and it’s just the wild west, you’re going to have vendors everywhere,” he said. “Shadow AI is going to appear in many different places in the organization.”

From there, it’s time to move the organization toward literacy and create a safe environment for employees to experiment.

To illustrate why AI literacy is no longer optional, Lloyd described a recent audience exercise. He asked roughly 150 attendees to raise their hands if they would hire someone today who didn’t know how to use Excel or email.

No hands.

“So, why would AI be any different?” he said. “Change your mindset and ask yourself: Is this another capability we have to bring into our toolbox? It has a potentially big impact, but it is another piece of capability to bring in and look toward mastering.”

The workforce disruption question

Even with a reframed mindset, the disruption question doesn’t disappear. Lloyd acknowledges it directly. He cites research suggesting that a large portion of global jobs are exposed to changes from AI and that the majority of organizations feel their workforce isn’t prepared. “That’s what the CHRO has sitting above their head right now,” he said.

His framework for understanding the disruption isn’t elimination; it’s compression. AI enables individuals to absorb capabilities that previously required specialists. He uses the image of a perimeter of skills surrounding any given role. As AI makes those adjacent skills accessible, the perimeter shrinks inward. Work that once required three analysts may require one, plus a more capable AI.

“It’s not that I think that’s a bad thing,” Lloyd said. “It frees us up to look at other problems. Creative destruction means yes, some things are destroyed and new things are created.”

For organizations that want to grow without shedding headcount, he sees another path: using AI to expand velocity and market reach rather than reduce costs. “If I can increase my velocity to go after 20% more of the market with the same team, that’s higher gross margins, higher revenue, higher free cash flow—all the things that propel a business.”

After more than two decades in AI, Lloyd said the most satisfying part of his current role isn’t building the technology. It’s what happens when he gets to explain it to boards, customers and prospects who walk away with a framework they can actually use.

“We’ve gotten to teach, not sell,” he said. “And to me, that’s just the highlight of my day.”

The post Dayforce top AI exec’s advice on closing the readiness gap appeared first on HR Executive.
