How to Run an AI Skills Audit Across Your Organisation


Last month, I sat in a boardroom while an executive declared we needed “everyone trained on AI by end of quarter.” When I asked what skills specifically, I got blank stares. Sound familiar?

You can’t train what you haven’t measured. Before you roll out another LinkedIn Learning subscription or book an expensive workshop, you need to know where your people actually stand with AI. Here’s how to run a proper skills audit that gives you actionable data, not just tick-box compliance.

Step 1: Define What AI Skills Actually Mean for Your Business

Don’t start with a survey. Start with clarity.

Sit down with department heads and map out which AI capabilities matter for different roles. A finance analyst needs different skills than a customer service team leader. I typically break it into three categories:

AI literacy - Understanding what AI is, isn’t, and how it impacts their work (everyone needs this)

AI application - Using AI tools in daily tasks like drafting emails, analysing data, or creating content (role-dependent)

AI strategy - Making decisions about AI implementation, evaluating vendors, managing AI projects (leadership and specialists)

For each role family, identify 3-5 specific skills or tools. Be concrete. “Prompt engineering for customer response templates” beats “AI communication skills” every time.

Step 2: Design Your Survey (Keep It Under 10 Minutes)

People won’t complete a 40-question audit. They just won’t. I keep mine to 15 questions maximum.

Start with current usage: What AI tools do you use at work? How often? What tasks? Then move to confidence ratings on a simple 1-5 scale for the specific skills you identified in step one.

Here’s the question that always gives me gold: “What work task would you do differently if you knew how to use AI for it?” The answers tell you exactly where training will have immediate ROI.

Include a section on barriers. Is it lack of knowledge, lack of access, unclear policies, or just no time to learn? According to Jobs and Skills Australia, understanding adoption barriers is just as important as identifying skill gaps when planning workforce development.

Make it anonymous if you want honest answers about confidence levels. Make it mandatory if you want participation rates above 40%.
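If you want to turn those 1-5 confidence ratings into a ranked list rather than eyeballing a spreadsheet, a few lines of Python will do it. This is a minimal sketch with made-up responses and hypothetical skill names, just to show the aggregation:

```python
from statistics import mean

# Hypothetical survey responses: each dict maps a skill to a 1-5 confidence rating
responses = [
    {"prompt_writing": 2, "ai_literacy": 3, "ai_data_analysis": 1},
    {"prompt_writing": 4, "ai_literacy": 4, "ai_data_analysis": 2},
    {"prompt_writing": 1, "ai_literacy": 3, "ai_data_analysis": 1},
]

def average_confidence(responses):
    """Return mean confidence per skill, lowest (biggest gap) first."""
    skills = responses[0].keys()
    averages = {s: mean(r[s] for r in responses) for s in skills}
    return dict(sorted(averages.items(), key=lambda kv: kv[1]))

for skill, score in average_confidence(responses).items():
    print(f"{skill}: {score:.1f} / 5")
```

The lowest-scoring skills float to the top, which is exactly the order you want when you sit down to prioritise in step four.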

Step 3: Conduct Working Interviews with High Performers

Surveys tell you what people think they know. Watching them work tells you what they actually do.

Pick 5-8 people across different departments who are already using AI tools effectively. Spend 30 minutes with each, asking them to show you their workflow. You’ll discover tools and techniques your survey never captured.

I once found an entire customer service team using ChatGPT for translation because our official system was too slow. That discovery reshaped our entire training priority list.

These interviews also identify your future trainers. The finance officer who’s automated her monthly reports? She’s teaching that workshop in six weeks.

Step 4: Map Your Gaps and Prioritise

Now you’ve got data. Time to make sense of it.

Create a simple matrix: high importance vs. current capability for each skill area. Anything that’s high importance and low capability goes to the top of your training list.

But here’s where most L&D teams stumble - they try to fix everything at once. Don’t. Pick two, maybe three priority areas maximum for your first quarter.

When I was working with AI consultants in Sydney on an enterprise-wide rollout, we started with just prompt writing for managers and basic AI literacy for everyone else. Once those stuck, we expanded. Trying to do it all simultaneously just creates training fatigue.

Step 5: Build Your Training Roadmap

Your roadmap needs four components:

Quick wins - Skills people can learn in under an hour that deliver immediate value (like using AI for meeting summaries)

Foundation training - Broader AI literacy for everyone, delivered in digestible modules

Role-specific deep dives - Targeted training for teams where AI can transform workflows

Ongoing support - Office hours, peer learning groups, updated resources as tools evolve

Set realistic timelines. Behaviour change doesn’t happen in a two-hour workshop. I typically plan 3-6 months for foundational shifts, with check-ins at 30, 60, and 90 days.

And please, measure something beyond completion rates. Track actual tool adoption, time saved, or process improvements. The Australian HR Institute has solid frameworks for measuring training effectiveness if you need a starting point.

The Real Success Metric

Three months after your audit, you should be able to walk up to any team member and ask, “How are you using AI in your work?” If they can give you a specific answer and show you an example, your audit worked.

If they look confused or say “not really,” you’ve got more work to do - but at least now you’ll know exactly where to focus.

Start small, measure what matters, and build from there. Your executive might want everyone trained by end of quarter, but you’ll deliver something better: people who actually know how to use AI to do their jobs better.