Why Your Best Employees Are Quietly Using AI Without Telling You
A few weeks ago, I was running a workshop for a mid-sized professional services firm in Melbourne. During a break, one of the top-performing analysts pulled me aside and said, “I’ve been using Claude to draft my client reports for six months. Please don’t tell my manager.”
She wasn’t doing anything wrong. Her reports were excellent. But she was convinced that if anyone found out, they’d think she was cheating - or worse, that her role would be seen as replaceable.
She’s not alone. Not even close.
The Shadow AI Problem Is Bigger Than You Think
Across every organisation I work with, the pattern is the same. Your best people - the ones who hit deadlines, produce quality work, and consistently over-deliver - are quietly using ChatGPT, Claude, Copilot, and a dozen other AI tools behind the scenes. And they’re not telling anyone.
A 2024 survey from Gartner found that nearly half of employees who use AI at work do so without formal approval or their manager’s knowledge. Some estimates push that number even higher in knowledge-work roles.
Why the secrecy? Three reasons come up again and again in my conversations:
Fear of looking lazy. People worry that admitting they use AI to draft documents or summarise meetings makes them seem like they’re not doing “real work.” There’s a deep anxiety about being seen as taking shortcuts.
Fear of consequences. Many organisations don’t have clear AI usage policies. When people don’t know the rules, they assume the worst: “If I use this tool and something goes wrong, am I liable? Will I get disciplined?”
Fear of irrelevance. This is the big one. “If I show everyone how much AI helps me do my job, will management decide they only need half as many people in my team?”
The Risk You Can’t Ignore
Here’s the thing: I actually understand why people hide it. The fears aren’t irrational. But shadow AI adoption creates serious problems that L&D and HR leaders need to address head-on.
Data security. When employees paste client data or internal strategy documents into free AI tools without approved safeguards, your organisation has a data governance problem it doesn’t even know about. If a customer complaint containing personal health or financial details ends up in ChatGPT, you’ve potentially breached Australia’s Privacy Act 1988.
Inconsistent quality. Without guidelines, every person is using AI differently. Some are fact-checking outputs carefully. Others are hitting paste and moving on. You end up with wildly uneven quality and no way to spot it.
Missed learning opportunities. If your best people are figuring out AI workflows in isolation, the rest of your workforce is falling behind. The knowledge stays siloed. The gap grows.
What This Tells You (If You’re Willing to Listen)
Here’s the part that most commentary on shadow AI misses: the behaviour is a signal, not just a risk.
When employees adopt AI tools on their own, without training, without support, and despite the fear of getting caught, that tells you something powerful about where the demand is. These people have identified genuine pain points in their work and found tools that help. That’s exactly the kind of initiative most organisations claim they want.
Instead of cracking down, L&D leaders should be asking: what are they using? What problems are they solving? And how do we turn this underground innovation into organisational capability?
The Australian HR Institute has been urging organisations to develop clear, practical AI policies for over a year now. Yet many Australian businesses still haven’t published internal guidelines. That vacuum is where shadow AI thrives.
What to Do About It
If you’re an L&D leader, an HR director, or a people manager reading this, here’s my advice:
Create psychological safety first. Before you write a policy, have honest conversations. Run anonymous surveys asking what tools people already use and what they’d use if they could. Make it clear that you’re not looking to punish - you’re looking to understand.
Write a practical AI use policy. Not a 40-page legal document. A clear, readable guide that tells people: these tools are approved, these are the rules around data, and here’s who to ask if you’re unsure. Two pages. Maybe three.
Turn your shadow users into champions. That analyst in Melbourne? She should be running an internal workshop, not hiding in the corridor. Find the people who’ve already figured it out and give them a platform to teach others. Peer learning beats vendor-led training almost every time.
Invest in approved tools. If your people are using free versions of AI tools because you haven’t provided enterprise options, that’s on you. Approved, secure tools with proper data handling remove the biggest risk factor overnight.
The Bottom Line
Shadow AI isn’t going away. The tools are too good, too accessible, and too useful for people to stop using them just because there’s no official policy. Every week you wait to address this, the gap between what your people are actually doing and what you think they’re doing gets wider.
Your best employees are already showing you where AI can make a difference. The question is whether you’ll meet them there or keep pretending it isn’t happening.
I know which option I’d choose.