When AI Training Isn't the Answer


My phone rings. It’s an HR director at a mid-sized company. “Our AI adoption is stalling. We need better training. Can you help?”

I’ve learned to ask follow-up questions before agreeing. What does adoption look like now? What barriers are people encountering? What’s the work environment like?

Forty minutes later, I understand the real problem: people don’t have access to the AI tools. The IT security review has been stuck for six months. No amount of training will fix that.

This happens constantly. Organisations assume that capability gaps explain AI adoption problems, so they reach for training solutions. But often the real barriers are elsewhere.

Before investing in AI training, make sure training is actually the answer.

The Training Reflex

L&D professionals (myself included) have a natural bias toward training solutions. It’s our domain. We’re good at it. And organisations often ask us to solve problems via training.

But the evidence is clear: a large proportion of performance problems stem from factors other than skill deficiency. Research summarised by the Association for Talent Development suggests that environmental factors account for as much as 75% of performance issues. Common non-training causes include:

  • Inadequate tools or technology
  • Unclear expectations or priorities
  • Misaligned incentives
  • Insufficient time or resources
  • Poor management
  • Organisational barriers
  • Cultural resistance

Training can’t fix these problems. It can only build capability in people who already have everything else they need to perform.

Diagnosing the Real Problem

When AI adoption stalls, systematic diagnosis beats jumping to solutions.

The Performance Analysis Questions

Ask these questions before recommending training:

Do people have access to the tools? This sounds basic, but it’s often the issue. Procurement delays, security reviews, licensing restrictions, and IT provisioning create barriers that training can’t overcome.

Do people have time to learn and apply new skills? If workloads leave no space for experimentation, capability can’t develop regardless of training quality. People need slack in their schedules to try new approaches.

Are expectations clear? Do people know that AI adoption is expected? Do they know what specific applications they should be using? Vague intentions don’t drive behaviour change.

Are incentives aligned? Is AI adoption rewarded? Is resistance penalised? If people are measured entirely on traditional metrics, they’ll rationally prioritise traditional approaches.

Is the environment supportive? Do managers encourage experimentation? Is it safe to try and fail? Are there peers to learn from? Culture determines more than curriculum.

Is the technology actually useful? Sometimes the AI tools simply don’t help with the work people do. No amount of training fixes a bad tool-task match.

The Capability Questions

Only if environmental factors aren’t the problem should you explore capability gaps:

Do people know what to do? This is knowledge. If people don’t understand at all, they need foundational knowledge. If they understand conceptually how to use AI but can’t apply it, the gap is skill, not knowledge.

Can people do it if they try? This is skill. Someone might understand prompting concepts but lack the ability to execute effectively. Practice builds ability.

Do people know when to do it? This is discrimination, or in plainer terms, judgment. People might have knowledge and skill but not recognise when to apply AI versus traditional approaches. Judgment develops through experience.

If the answer to all three is yes, training isn’t the answer.

Common Non-Training Barriers

Here are the barriers I encounter most often:

Technology Access

People can’t use tools they can’t access. Check:

  • Has software been procured and licensed?
  • Are accounts provisioned for all relevant staff?
  • Do security settings allow appropriate access?
  • Is the technology reliable and performant?

Solution: Work with IT and procurement to resolve access barriers.

Time and Workload

Learning takes time. Applying new skills takes time. If people are at full capacity with existing work, nothing new can fit.

Solution: Explicitly protect time for AI learning and experimentation. Reduce other demands to create space.

Managerial Resistance

If managers don’t support AI adoption—whether through active resistance or passive neglect—their teams won’t adopt.

Solution: Address manager concerns, develop manager capability, align manager incentives.

Unclear Expectations

People can’t meet expectations they don’t understand. Generic “be more AI-savvy” isn’t actionable.

Solution: Specify what AI applications are expected for which roles. Clarify what good looks like.

Misaligned Metrics

People optimise for what they’re measured on. If metrics don’t account for AI adoption, it won’t be prioritised.

Solution: Adjust performance measurement to include AI capability and application.

Change Fatigue

If the organisation has been through multiple major changes, people may be exhausted. Another change initiative, even a beneficial one, faces resistance.

Solution: Acknowledge fatigue, pace change appropriately, demonstrate value quickly.

Legitimate Concerns

Sometimes people have legitimate reasons for hesitancy: data privacy concerns, quality worries, ethical questions. These deserve engagement, not dismissal.

Solution: Address concerns substantively. Where concerns are valid, modify the approach.

When Training IS the Answer

Training is appropriate when:

  • Environmental factors have been addressed
  • People have access, time, support, and incentives
  • The barrier is genuinely capability—knowledge, skill, or judgment
  • The required capability can be developed through instruction and practice

In these cases, good training can have significant impact. But training deployed into an unsupportive environment is largely wasted.

A Diagnostic Process

Before approving AI training requests, run this process:

Step 1: Clarify the performance gap. What specifically should people be doing that they’re not doing? Get concrete.

Step 2: Rule out environmental factors. Work through the checklist: access, time, expectations, incentives, management, culture. If any of these are barriers, address them first.

Step 3: Assess capability gaps. Once environmental factors are addressed, what capability gaps remain? Knowledge? Skill? Judgment?

Step 4: Determine training appropriateness. Can training address these gaps? Will people have the opportunity to apply what they learn? Is the timing right?

Step 5: Design and deliver. Only now do you design and deliver training, knowing that the conditions for success are in place.
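
If it helps to see that gating logic spelled out, here is a minimal sketch in Python. Everything in it—the field names, the wording of the recommendations, the example at the end—is illustrative rather than a prescribed tool; the point is simply the order of the checks.

```python
# A minimal sketch of the diagnostic gate described above.
# All names and messages are illustrative, not a prescribed tool.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Diagnosis:
    performance_gap: str                 # Step 1: the concrete behaviour that is missing
    environmental_barriers: List[str] = field(default_factory=list)  # Step 2
    capability_gaps: List[str] = field(default_factory=list)         # Step 3: knowledge, skill, judgment
    can_apply_soon: bool = True          # Step 4: will people get to use what they learn?


def recommendation(d: Diagnosis) -> str:
    # Step 2: environmental barriers come first; training cannot overcome them.
    if d.environmental_barriers:
        return "Address the environment first: " + ", ".join(d.environmental_barriers)
    # Step 3: no capability gap means training is not the answer.
    if not d.capability_gaps:
        return "No training needed; the gap lies elsewhere."
    # Step 4: training people who cannot apply it soon is largely wasted.
    if not d.can_apply_soon:
        return "Delay training until people can apply what they learn."
    # Step 5: only now is training the right intervention.
    return "Design and deliver training for: " + ", ".join(d.capability_gaps)


# The opening anecdote, roughly: access is the barrier, so training waits.
print(recommendation(Diagnosis(
    performance_gap="teams are not using the approved AI assistant at all",
    environmental_barriers=["tool access stuck in a security review"],
    capability_gaps=["prompting skill"],
)))
```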

The Political Challenge

I won’t pretend this is easy. When executives demand training, pushing back requires courage. And when you work in L&D, identifying non-training solutions can feel like arguing yourself out of a job.

But recommending training when training isn’t the answer sets you up for failure. The training will be delivered. Adoption won’t improve. And you’ll be blamed for the ineffective programme.

Better to diagnose accurately upfront and advocate for whatever interventions will actually work—training or otherwise. That builds credibility over time, even if individual conversations are difficult.

Collaborating Across Functions

Most non-training barriers require collaboration with other functions:

  • IT and security for technology access
  • HR for performance management alignment
  • Operations for workload management
  • Leadership for cultural change
  • Finance for resource allocation

L&D alone can’t solve these problems. But L&D can diagnose accurately and advocate for comprehensive solutions.

Position yourself as a performance partner, not just a training provider. Help the organisation understand what’s really needed, then contribute your piece of the solution.

The Bottom Line

AI adoption problems are often mistaken for training problems. Before investing in AI training, diagnose whether training is actually the answer.

If people lack access, time, support, or incentives, training won’t help. Address environmental barriers first.

If environmental factors are addressed but capability gaps remain, training can be highly effective.

The goal isn’t to deliver training. The goal is to improve AI adoption. Sometimes training is the path. Sometimes it isn’t.

Accurate diagnosis is where effective solutions begin.