Your Employees Are Faking AI Competency. Here's Why That's Understandable.

During a recent AI training session, I watched a room full of professionals nodding along to explanations of prompt engineering. Everyone seemed to understand. Everyone took notes.

Later, when they tried to apply the concepts independently, most were completely lost.

They’d been pretending. And honestly, I don’t blame them.

The Competency Performance

There’s enormous pressure right now for employees to appear AI-savvy. Job security anxieties are real. Nobody wants to be the person who “doesn’t get it” when their manager is excited about the new AI tools.

So people nod. They use AI buzzwords in meetings. They say “I’ve been playing with ChatGPT” without mentioning they’ve only used it to draft one email.

This is a rational response to an irrational environment. We’ve created workplaces where admitting you don’t understand something feels professionally dangerous.

Why Standard Training Fails Here

Most corporate AI training assumes people will ask questions when confused. They won’t. Not when they’re worried that asking reveals them as behind the curve.

Standard approaches also treat AI literacy as a single skill to be acquired in a workshop. It’s not. It’s a practice that develops through regular use, experimentation, and—crucially—making mistakes in a safe environment.

One-day training sessions teach vocabulary. They don’t build actual capability.

What’s Actually Working

The L&D teams having success with AI upskilling are creating conditions where experimentation is normal and visible confusion is acceptable.

Peer learning groups work better than expert-led training. When people learn alongside colleagues at similar skill levels, they’re more comfortable admitting what they don’t know. The facilitator becomes a resource, not a judge.

Small, ongoing touchpoints beat intensive workshops. A 15-minute weekly practice session where people share what worked and what didn’t is more effective than an all-day session everyone forgets by the following week.

Leaders modelling uncertainty helps enormously. When senior managers openly say “I tried this prompt and it didn’t work well—anyone have ideas?” it signals that not knowing everything is acceptable.

The Psychological Safety Connection

This comes back to psychological safety, which isn’t just an HR buzzword. It’s the foundation of genuine skill development.

The AHRI research on learning cultures consistently shows that employees learn better when they feel safe to fail. That’s doubly true for AI tools, where the learning curve involves frequent trial and error.

If your organisation punishes visible incompetence (even subtly, through eye-rolls or exclusion from projects), people will fake competency rather than develop it.

Practical Adjustments

Reframe training objectives. Don’t measure success by how many people “completed” the AI training; measure it by actual tool adoption rates six months later.

Create explicit practice time. “Play around with the tools” isn’t a clear directive. “Spend 30 minutes this week using AI to [specific task], and bring your results to Friday’s session” is.

Separate learning spaces from performance spaces. The workshop room shouldn’t feel like a performance review. Different framing, different energy.

Address job security directly. If you’re not planning layoffs related to AI adoption, say so clearly. If you are, be honest about that too. Uncertainty breeds defensive behaviour.

The Bigger Picture

We’re asking workers to rapidly adopt technologies that legitimately challenge how they think about their skills and value. Some anxiety is inevitable.

The organisations that handle this transition well won’t be the ones with the best training programmes. They’ll be the ones that create environments where people can learn at their own pace without pretending to be somewhere they’re not.

Faking competency is a symptom. Psychological safety is the treatment.