Culture over code: 5 strategies for driving responsible AI adoption

Three years ago, when tools like ChatGPT and Copilot exploded onto the scene, the immediate reaction in boardrooms everywhere was a mix of “How do we use this?” and “How do we stop our people from accidentally leaking our secrets to this?”

While Stefanini has been a pioneer in AI for more than 14 years, not every employee across our global portfolio was working closely enough with AI three years ago to adopt these revolutionary tools immediately. As Stefanini created a suite of vetted, specialized AI tools for internal use, we realized that HR needed to spearhead the creation of a culture that sees the potential in AI across each department.

We didn’t get everything right on day one. We had to pivot, rethink our training and have some difficult conversations internally. But through that process, we learned that driving responsible AI adoption is about moving people from a place of fear and uncertainty to one of confidence.

Here is how we approached that shift and what we learned along the way.

The elephant in the room: ‘Will AI replace me?’

You cannot have a productive conversation about AI adoption until you address the elephant in the room. When employees hear “efficiency” and “automation,” they often think “redundancy.”

We found that ignoring this fear just breeds resistance. We had to be incredibly transparent about what AI was there to do—and what it wasn’t. Doing that changed the narrative from replacement to “upskilling.”

Take our talent acquisition team as an example. When we first introduced AI tools for recruiting, there was natural hesitation. Were we trying to automate the recruiter out of the process?

We had to sit down and look at the actual workflow. A recruiter spends hours manually screening resumes, often giving each one only 30 seconds of attention because of the sheer volume. We showed them how our internal AI tools could handle that initial screening against job descriptions in seconds—not to make the decision, but to surface the data so the recruiter could spend their time actually talking to candidates.

Once they saw that the tool was taking on the tedious administrative work they hated, not their jobs, the buy-in happened naturally. Now, our recruiters are some of our heaviest users because they realized AI gave them their time back.

Taking AI adoption from ‘don’t you dare’ to ‘here’s how’

In the beginning, our policy stance—like many companies’—was defensive. We were worried about security, data privacy and the “black box” of public AI tools. But we quickly realized that a strict ban doesn’t stop people from using AI; it just pushes them into the shadows. People will use the tools that make their lives easier, whether you sanction them or not.

We had to shift our mindset from policing to “sandboxing.” Working with our VP of Innovation, we realized we needed to give employees a safe place to play. We moved away from a culture of “don’t touch that” to one of guided experimentation.

We created internal, private instances of these tools—safe environments where company data remained secure. But we also attached a crucial caveat to this freedom: the “human in the loop” rule.

We made it explicitly clear in our policies that while we encourage experimentation, the employee is ultimately responsible for the work product. If the AI hallucinates or produces biased output, you cannot blame the bot. You are the editor. This balance—giving them the freedom to explore while keeping accountability with the human—was the turning point for responsible adoption.

Training: Moving beyond the ‘lunch and learn’

Early on, I’ll admit that some of our training was reactive. We would see a security “oops” or a misuse of a tool and we’d rush to correct it. We realized pretty quickly that reactive training doesn’t build competence. We also learned that generic training falls flat. Sending an employee a link to an “Intro to AI” video on LinkedIn Learning is fine for basics, but it doesn’t help them do their specific job.

We started finding success when we made the training contextual. We leveraged our “SAI Library”—our internal suite of AI tools—and began showing specific departments exactly how it applied to them.

For our software developers, the training was about code documentation. For HR, it was about drafting communications or analyzing engagement survey data. We stopped trying to make everyone an AI expert and started trying to make them experts in using AI for their specific role.

The power of peer influence

Perhaps the biggest lesson we learned is that employees don’t always want to listen to leadership or IT. They’re influenced by each other. To get real traction, we launched “AI Week.” Instead of just having executives lecture the staff, we opened the floor to experts from different business units.

There is something powerful about seeing a peer from a neighboring department get up and say, “Hey, I used this prompt to solve this problem, and it saved me three hours.” It turns the abstract concept of innovation into something tangible.

We also leaned into an ambassador program. We identified the “super users”—those who were naturally curious and experimenting on their own—and gave them a platform. These ambassadors bridge the gap between technical possibility and daily reality.

Modeling from the top

Finally, none of this works if the C-suite is exempt. If leadership views AI as a tool for “the workers” to increase productivity, but not something they need to learn themselves, the initiative will die on the vine.

We made a concerted effort to ensure our leadership team was visible in their adoption. When a CEO stands up in a town hall and admits they used AI to help draft a memo or analyze a report—and crucially, when they admit they had to double-check the output—it gives the rest of the organization permission to be curious. It signals that we are all learning this together.

The human element remains in AI adoption

As HR leaders, our job in this era of AI isn’t to be technical wizards. We have IT teams for that. Our job is to manage the human reaction to the change.

The technology will change next month, and again six months after that. A prompt that works today might be obsolete tomorrow. But the human need for psychological safety, for clear boundaries and for a sense of purpose in their work remains constant.

If we can build a culture that values curiosity over compliance and safety over speed, we can thrive in the age of AI.

The post Culture over code: 5 strategies for driving responsible AI adoption appeared first on HR Executive.
