Assessing Your Team's AI Skills Without Expensive Testing Platforms

Every L&D leader I talk to is trying to figure out the same thing: where does our workforce actually stand on AI skills? And most are getting quotes from assessment vendors that make their eyes water.

Here’s the thing—you can get useful baseline data without spending $50K on a testing platform. You just need to ask the right questions and watch the right behaviours.

Why Standard Assessments Often Mislead

Most AI skills assessments test what people know in theory. Can they define machine learning? Do they understand what a prompt is?

But theoretical knowledge doesn’t translate to workplace capability. I’ve seen people ace AI assessments and then struggle to write a useful prompt for a basic task. The gap between knowing about AI and knowing how to apply it is enormous.

A Practical Assessment Framework

Level 1: Basic Awareness

People at this level know AI exists and roughly what it does. They’ve probably tried ChatGPT once or twice but don’t use it regularly.

Assessment questions:

  • When was the last time you used an AI tool for work?
  • Can you name two AI tools relevant to your role?
  • What’s one task you think AI might help with?

Most employees are here. That’s fine—it’s a starting point, not a problem to fix.

Level 2: Functional Use

These people use AI tools regularly for specific tasks. They can get decent results but may not optimise their approach.

Observable behaviours:

  • Regularly uses AI for drafting, summarising, or research
  • Sometimes shares AI-generated outputs with colleagues
  • Has found at least one workflow where AI saves time

Assessment approach: Give them a specific work task and ask them to complete it with AI assistance while you observe. It's not a test; just watch how they approach it.

Level 3: Proficient Application

Proficient users understand how to get good results consistently. They iterate on prompts, recognise AI limitations, and know when not to use AI.

Observable behaviours:

  • Customises AI usage for different task types
  • Reviews and significantly edits AI outputs before using them
  • Can explain why certain approaches work better than others

Assessment approach: Have them walk you through their process for a complex task. Ask “why did you do it that way?” and listen for nuanced answers.

Level 4: Strategic Integration

These are your AI champions—people who see opportunities others miss and help colleagues adopt AI effectively.

Observable behaviours:

  • Proactively identifies new AI use cases
  • Helps others troubleshoot AI problems
  • Understands organisational and ethical considerations
  • Can evaluate new AI tools objectively

You probably have fewer of these people than you think. That’s normal.
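
If you want everyone involved in the assessment to record findings against these four levels consistently, even a tiny shared structure helps. Here's a minimal sketch in Python, assuming you track nothing more than a name, a level, and free-text notes; the field names are illustrative, not a prescribed schema:

    # A minimal structure for recording results against the four levels.
    # Field names are illustrative only; adapt to whatever you already track.
    from dataclasses import dataclass, field
    from typing import Optional

    LEVELS = {
        1: "Basic Awareness",
        2: "Functional Use",
        3: "Proficient Application",
        4: "Strategic Integration",
    }

    @dataclass
    class AssessmentRecord:
        name: str
        self_reported_level: int              # from the self-report survey
        observed_level: Optional[int] = None  # filled in after observation
        notes: list[str] = field(default_factory=list)

    # A common pattern: confident self-report, weaker observed skill.
    record = AssessmentRecord(name="A. Example", self_reported_level=3)
    record.observed_level = 2
    record.notes.append("Strong first prompt; accepted output without review.")
    print(f"{record.name}: says {LEVELS[record.self_reported_level]}, "
          f"shows {LEVELS[record.observed_level]}")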

Running the Assessment

Phase 1: Self-Report Survey (Week 1)

Send a brief survey asking:

  1. How often do you use AI tools for work tasks? (Daily / Weekly / Monthly / Rarely / Never)
  2. What AI tools have you used in the past month?
  3. Describe one task where AI saved you time recently.
  4. What’s one thing you’d like to learn about AI?

Keep it to 5-7 questions maximum. Long surveys get abandoned.
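
Once responses come back, you don't need an analytics platform to get a first read. Here's a minimal Python sketch that tallies question 1 into provisional levels; the file and column names are hypothetical, and the daily-or-weekly-means-Level-2 cut-off is a rough heuristic, not a validated threshold:

    # Rough tally of question 1 into provisional levels. Assumes a CSV
    # export with a "usage_frequency" column holding one of: Daily,
    # Weekly, Monthly, Rarely, Never (hypothetical file and column names).
    import csv
    from collections import Counter

    # Heuristic cut-off: daily or weekly use suggests at least Level 2
    # (functional use); anything less frequent stays at Level 1.
    PROVISIONAL_LEVEL = {
        "Daily": 2,
        "Weekly": 2,
        "Monthly": 1,
        "Rarely": 1,
        "Never": 1,
    }

    with open("survey_responses.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    levels = Counter(
        PROVISIONAL_LEVEL.get(row["usage_frequency"], 1) for row in rows
    )

    for level in sorted(levels):
        share = levels[level] / len(rows)
        print(f"Provisional Level {level}: {levels[level]} people ({share:.0%})")

Treat the output as a starting point for Phase 2, not a final classification; question 1 alone can't separate Level 2 from Level 3.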

Phase 2: Task Observation (Weeks 2-3)

Select a sample—maybe 15-20 people across different teams and levels. Ask them to complete a work-relevant task using AI while you observe. No judgement, just understanding.

Good tasks for observation:

  • Summarise this meeting transcript
  • Draft an email responding to this customer complaint
  • Research this topic and outline key points

Watch for: How do they start? Do they iterate? How do they evaluate the output?

Phase 3: Manager Conversations (Week 4)

Talk to managers about their team’s actual AI usage. Ask:

  • Who on your team uses AI most effectively?
  • Who seems hesitant or resistant?
  • What AI skills would make the biggest difference for your team?

Managers see daily reality that surveys miss.

What to Do With the Data

You’ll likely find:

  • Most people are at Level 1-2: This is typical. Don’t panic.
  • Pockets of advanced usage exist: Find these people—they’re your internal champions.
  • Some roles need AI skills urgently, others don’t: Prioritise accordingly.
  • Self-assessment often oversells: People think they're more capable than observation reveals (one way to quantify that gap is sketched below).
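
If you captured both a self-reported and an observed level for your Phase 2 sample, a few lines of Python make the overselling concrete. A minimal sketch; the paired levels here are invented purely for illustration:

    # Quantifying the gap between self-reported and observed levels for
    # the observed sample. The numbers below are made up for illustration.
    paired_levels = [  # (self-reported level, observed level)
        (3, 2),
        (2, 2),
        (4, 3),
        (2, 1),
    ]

    gaps = [reported - observed for reported, observed in paired_levels]
    oversold = sum(1 for gap in gaps if gap > 0)

    print(f"Average gap (reported minus observed): {sum(gaps) / len(gaps):+.1f}")
    print(f"{oversold} of {len(paired_levels)} self-rated above what observation showed")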

Building From Your Baseline

Once you know where people stand, you can design targeted interventions:

For Level 1 → Level 2: Focus on basic tool familiarity and finding one useful application. Hands-on workshops work better than training videos.

For Level 2 → Level 3: Teach prompting strategies and critical evaluation of outputs. Peer learning groups can be effective here.

For Level 3 → Level 4: Expose them to broader applications and give them opportunities to mentor others. Consider formal AI champion roles.

What This Approach Doesn’t Do

To be honest, this DIY assessment won’t give you:

  • Benchmarks against other organisations
  • Statistical validation of skill levels
  • Detailed competency mapping
  • Automated tracking over time

If you need those things for compliance, executive reporting, or large-scale workforce planning, you probably do need a proper platform. But for understanding where your team actually stands and designing effective training? This approach works.

Getting Started

You can run this assessment with existing resources in about four weeks. The main investment is time from your L&D team.

Start with one division or department. Learn what works, refine your approach, then scale.

You don’t need perfect data to start helping your workforce develop AI skills. You need good-enough data and the willingness to adjust as you learn.

Most organisations are still figuring this out. If you’re even asking the question, you’re ahead of many.