Running an Effective AI Upskilling Program: Lessons from the Field


I’ve designed and delivered AI upskilling programs for organisations ranging from 50 people to 50,000. Some worked brilliantly. Others were expensive failures. The difference usually wasn’t the content—it was the approach.

After years of trial and error, I’ve identified the factors that separate effective AI upskilling from corporate theatre. If you’re planning a program, here’s what actually matters.

Start with the Business Problem

The worst AI training programs start with “we should train people on AI” and work backwards from there. The best ones start with a specific business problem and ask whether AI skills could help solve it.

Bad: “Let’s offer ChatGPT training to all staff.”

Good: “Our customer response times are too slow. Could AI help our service team work faster? If so, what skills would they need?”

This isn’t just semantics. When you start with the business problem, you can:

  • Define clear success metrics
  • Target the right audience
  • Design relevant content
  • Justify the investment
  • Measure actual impact

Generic AI awareness programs might make people feel good, but they rarely change how work gets done. Problem-focused programs do.

Segment Your Audience Ruthlessly

Not everyone needs the same training. I typically segment by:

Role relevance: How central is AI to this person’s work? A content marketer needs deep fluency. A warehouse operative might need basic awareness only.

Starting capability: What do people already know? Over-training wastes time and insults intelligence. Under-training creates frustration.

Learning preferences: Some people learn best in workshops. Others prefer self-paced materials. Some need hands-on practice. Your program should accommodate different styles.

Change readiness: Some people are eager to learn. Others are anxious or resistant. These groups need different messaging and support.

Design different pathways for different segments. A one-size-fits-all program fits no one well.
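
To make the segmentation concrete, here is a minimal sketch in Python of how you might map just two of these axes, role relevance and starting capability, to a training pathway. The segment labels and pathway names are hypothetical, not a prescription.

    # Hypothetical sketch: mapping two segmentation axes to a training pathway.
    # Segment labels and pathway names are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Participant:
        name: str
        role_relevance: str       # "core", "adjacent", or "peripheral"
        starting_capability: str  # "novice", "intermediate", or "advanced"

    PATHWAYS = {
        ("core", "novice"): "intensive workshops plus coached practice",
        ("core", "intermediate"): "advanced workshop plus peer community",
        ("core", "advanced"): "champion track: coach others, shape standards",
        ("adjacent", "novice"): "half-day workshop plus self-paced modules",
        ("adjacent", "intermediate"): "self-paced modules plus office hours",
        ("peripheral", "novice"): "awareness briefing only",
    }

    def assign_pathway(p: Participant) -> str:
        # Fall back to an awareness briefing when no specific pathway is defined.
        return PATHWAYS.get((p.role_relevance, p.starting_capability),
                            "awareness briefing only")

    print(assign_pathway(Participant("content marketer", "core", "novice")))

In practice the other two axes, learning preferences and change readiness, tend to shape delivery format and messaging rather than pathway content, so you might track them as separate fields rather than extra keys.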

Make It About Their Work, Not the Technology

People don’t care about AI as an abstract concept. They care about their tasks, their challenges, their goals.

Every training element should connect to real work scenarios. When teaching prompt engineering, use examples from participants’ actual jobs. When demonstrating capabilities, show solutions to problems they actually face.

I once ran a program where we spent the first hour having participants list their most tedious recurring tasks. Then we spent the rest of the day showing them how AI could help with those specific tasks. Engagement was through the roof because the content was immediately relevant.

Build in Substantial Practice Time

The biggest mistake in AI training is too much presentation and not enough practice.

AI fluency develops through experimentation, not instruction. People need to try things, fail, adjust, and try again. They need to encounter the tool’s limitations firsthand. They need to develop intuition for what works and what doesn’t.

My rough guide: at least 50% of training time should be hands-on practice. For a full-day workshop, that’s a minimum of three hours of active experimentation.

And the practice shouldn’t be “follow along with the instructor.” It should be “here’s a challenge, figure out how to solve it, share what worked.”

Address the Fear Directly

In every AI training session, there are people in the room worried about their jobs. Ignoring this doesn’t make it go away—it just makes people tune out.

Set aside time early in the program to acknowledge concerns explicitly. Let people voice their anxieties. Provide honest answers, not corporate talking points.

The honest answer, by the way, is usually nuanced. “No one’s job is being eliminated next quarter because of this” can be true and reassuring. “AI will never affect your role” is probably false and people know it.

What people need to hear is that developing AI skills is the best way to remain valuable. That the organisation is investing in their capability, not replacing them. That human judgment and expertise still matter enormously.

Create Post-Training Support Structures

Training events are just the beginning. Real capability development happens in the weeks and months that follow.

Plan for:

Practice opportunities: How will people continue using these skills after training? If they go back to jobs where AI isn’t integrated, everything they learned will fade.

Peer learning: Create communities of practice where people can share tips, troubleshoot problems, and learn from each other. These are often more valuable than formal training.

Manager reinforcement: Brief managers on what their people learned and how they can support application. A manager who asks “have you tried using AI for that?” is worth ten training sessions.

Ongoing resources: Job aids, prompt libraries, FAQ documents, office hours with experts. Make it easy to get help when stuck.
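
If a prompt library sounds abstract, here is a minimal sketch in Python of how one might be structured so people can search and reuse each other’s prompts. The entries, field names, and helper function are hypothetical, not a standard format.

    # Hypothetical sketch of a shared prompt library; entries and fields are illustrative.
    PROMPT_LIBRARY = [
        {
            "task": "Summarise a customer complaint thread",
            "owner": "Service team",
            "prompt": ("Summarise the following email thread in five bullet points, "
                       "flagging any promised follow-up actions and deadlines:\n\n{thread}"),
            "notes": "Works best when the thread is pasted in chronological order.",
        },
        {
            "task": "Draft a first-pass job advert",
            "owner": "People team",
            "prompt": ("Draft a job advert for a {role} based on the role description below, "
                       "in plain language and under 300 words:\n\n{role_description}"),
            "notes": "Always review for accuracy and tone before publishing.",
        },
    ]

    def find_prompts(keyword: str) -> list:
        # Simple keyword lookup so people can find prompts by task.
        return [e for e in PROMPT_LIBRARY if keyword.lower() in e["task"].lower()]

    for entry in find_prompts("complaint"):
        print(entry["task"], "-", entry["owner"])

The point is less the code than the habit: prompts that worked get captured somewhere shared rather than living in individual chat histories.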

Measure Beyond Satisfaction

Most training evaluation stops at “did participants like it?” This tells you almost nothing about whether the program achieved its goals.

Kirkpatrick’s model, which remains the industry standard for training evaluation, gives you four levels to work with:

Level 1 (Reaction): Yes, measure satisfaction. But don’t stop there.

Level 2 (Learning): Did people actually develop new capabilities? Pre and post assessments can measure this.

Level 3 (Behaviour): Are people using these skills in their work? This requires follow-up observation or self-reporting after a delay.

Level 4 (Results): Is the business problem you started with actually improving? This is the ultimate measure of success.

Most organisations measure Level 1 only. The good ones get to Level 2. The great ones measure all four levels.
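
As an illustration of what Levels 2 and 3 can look like in numbers, here is a minimal sketch in Python. The assessment scores, survey field, and 90-day window are hypothetical.

    # Hypothetical sketch of Level 2 (learning) and Level 3 (behaviour) measures.
    # Scores and survey responses below are illustrative, not real data.

    def learning_gain(pre_scores, post_scores):
        # Level 2: average improvement between pre- and post-training assessments.
        gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
        return sum(gains) / len(gains)

    def adoption_rate(follow_up_responses):
        # Level 3: share of participants reporting regular use at a 90-day follow-up.
        using = sum(1 for r in follow_up_responses if r["uses_ai_weekly"])
        return using / len(follow_up_responses)

    pre = [42, 55, 38, 60]    # assessment scores before training (out of 100)
    post = [68, 72, 61, 79]   # scores for the same people after training
    follow_up = [{"uses_ai_weekly": True}, {"uses_ai_weekly": False},
                 {"uses_ai_weekly": True}, {"uses_ai_weekly": True}]

    print(f"Level 2 learning gain: {learning_gain(pre, post):.1f} points")
    print(f"Level 3 adoption rate: {adoption_rate(follow_up):.0%}")

Level 4 is harder to script: it means going back to the business metric you started with, response times in the earlier example, and checking whether it actually moved.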

Watch for Common Failure Modes

Programs typically fail for predictable reasons:

Over-promising capabilities: Training that sets unrealistic expectations. People become frustrated and disengage when reality doesn’t match the hype.

Insufficient practice time: All presentation, no hands-on work. People leave feeling informed but not capable.

Generic content: Training that isn’t tailored to participants’ actual work. Interesting but not useful.

No follow-through: Excellent training followed by no support for application. Skills decay rapidly.

Wrong audience targeting: Training people who can’t or won’t use AI in their roles. Wasted investment.

Ignoring resistance: Ploughing ahead without addressing legitimate concerns. People physically present but mentally absent.

The Role of External Support

Sometimes internal teams can run effective AI upskilling programs. Sometimes external expertise adds value.

Consider external support when:

  • Your internal team lacks AI expertise
  • You need credibility that comes from outside perspective
  • The volume of training exceeds internal capacity
  • You want programs that incorporate external best practices

But don’t outsource entirely. Internal ownership ensures programs stay relevant to your specific context and continue after the external engagement ends.

A Realistic Timeline

Effective AI upskilling isn’t a one-time event. Plan for:

Months 1-2: Assess current state, define objectives, segment audience, design program

Months 3-4: Pilot with a small group, gather feedback, refine

Months 5-8: Roll out to broader population in waves

Ongoing: Support application, measure impact, iterate based on results

Rushing this timeline usually produces programs that feel polished but don’t create lasting change.

The Bottom Line

AI upskilling done well transforms how organisations work. Done poorly, it’s an expensive way to check a box.

The difference comes down to taking it seriously: starting with real business needs, designing for your specific workforce, building in substantial practice, addressing human concerns, and supporting people long after the training ends.

It’s more work than ordering an off-the-shelf course. But it’s the only approach that actually develops capability.