Rethinking Your Skills Taxonomy for the AI Era


Last month, I reviewed a skills taxonomy from a major organisation. Under “Research Skills,” the competencies listed were: “Identify relevant sources,” “Evaluate source credibility,” “Synthesise information,” and “Present findings.”

Every single one of those competencies is transformed by AI. The underlying skills needed to do research with AI assistance are fundamentally different from those needed without it.

Yet the taxonomy hadn’t been updated since 2019.

This organisation isn’t unusual. Most skills frameworks and taxonomies were built before AI became central to work. They’re increasingly misaligned with actual capability requirements.

It’s time to rethink.

Why Taxonomies Matter

Skills taxonomies might seem like bureaucratic artifacts, but they drive critical processes. LinkedIn Learning’s skills reports and Jobs and Skills Australia data both emphasise how rapidly skill requirements are evolving, and your taxonomy shapes how your organisation responds through:

  • Hiring: What competencies do we assess in candidates?
  • Performance: What capabilities do we evaluate?
  • Development: What skills do we invest in building?
  • Workforce planning: What capabilities do we need for the future?
  • Compensation: What skills are valued and rewarded?

If your taxonomy doesn’t reflect AI-era skill requirements, all these processes are misaligned.

How AI Changes Skill Requirements

AI doesn’t just add new skills; it transforms existing ones:

Tasks Move From Doing to Directing

Consider writing a market analysis. Before AI:

Skills needed: Research methods, data interpretation, analytical writing, formatting, editing.

With AI:

Skills needed: Defining the question well, directing AI research, evaluating AI outputs, integrating sources critically, refining and editing AI drafts, exercising judgment about conclusions.

The skill isn’t “write market analysis.” It’s “direct and refine AI-assisted market analysis.”

Quality Assurance Becomes Essential

AI produces confident-sounding output of varying quality. The ability to evaluate that output becomes critical.

For every AI-assisted task, add:

  • Critical evaluation of AI outputs
  • Fact-checking and verification
  • Bias detection
  • Judgment about appropriateness

These weren’t explicit skills before. Now they’re essential.

Meta-Skills Gain Importance

Working effectively with AI requires meta-skills that cut across domains:

  • Effective prompting and direction-giving
  • Iterative refinement processes
  • Knowing when AI helps vs. hinders
  • Maintaining human judgment in AI-assisted workflows

These meta-skills aren’t function-specific but apply everywhere.

Some Traditional Skills Remain Essential

Not everything changes. Some skills remain critical:

  • Complex human judgment
  • Relationship building and influence
  • Creative vision (not just creative production)
  • Ethical reasoning
  • Leadership and management
  • Physical skills and presence

A good taxonomy distinguishes skills AI transforms from skills that remain fundamentally human.

Framework for Taxonomy Revision

Here’s an approach to updating your skills framework:

Step 1: Audit Current Taxonomy

Review every skill category and competency:

  • How is AI changing how this skill is applied?
  • What new sub-skills are needed for AI-augmented work?
  • Which aspects become more important? Less important?
  • What entirely new skills should be added?

This audit should involve people doing the work, not just HR or L&D professionals.

Step 2: Identify AI Meta-Skills

Add a layer of AI-specific meta-skills that apply across roles:

AI Tool Proficiency: Ability to effectively use AI tools relevant to one’s work.

Prompt Engineering: Ability to provide clear, effective direction to AI systems.

AI Output Evaluation: Ability to critically assess AI-generated content for accuracy, appropriateness, and quality.

AI Workflow Integration: Ability to integrate AI tools effectively into work processes.

AI Ethics and Governance: Understanding of appropriate AI use, including limitations, biases, and ethical considerations.

These meta-skills should be expected across knowledge work roles.
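One way to make this cross-cutting layer concrete is to model the taxonomy as data, with the meta-skills defined once and attached to every domain rather than duplicated inside each one. A minimal sketch in Python (the class and field names here are illustrative, not a standard schema):

```python
from dataclasses import dataclass


@dataclass
class Competency:
    name: str
    definition: str


@dataclass
class SkillDomain:
    name: str
    competencies: list  # domain-specific competencies only


# Cross-cutting AI meta-skills: defined once, expected across knowledge-work roles
AI_META_SKILLS = [
    Competency("AI Tool Proficiency",
               "Effectively use AI tools relevant to one's work"),
    Competency("Prompt Engineering",
               "Provide clear, effective direction to AI systems"),
    Competency("AI Output Evaluation",
               "Critically assess AI-generated content for accuracy, "
               "appropriateness, and quality"),
    Competency("AI Workflow Integration",
               "Integrate AI tools effectively into work processes"),
    Competency("AI Ethics and Governance",
               "Understand appropriate AI use, limitations, and biases"),
]


def expected_competencies(domain: SkillDomain) -> list:
    """Every role sees its domain competencies plus the shared meta-skill layer."""
    return domain.competencies + AI_META_SKILLS


research = SkillDomain("Research", [
    Competency("Direct AI-assisted research",
               "Define questions, direct AI, integrate sources critically"),
])
print([c.name for c in expected_competencies(research)])
```

Keeping the meta-skills in one shared list means a regular review cycle can update them in a single place, instead of chasing copies through every domain.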

Step 3: Update Domain Skills

For each domain (writing, analysis, research, design, etc.), update the competencies:

Before: “Write clear business documents”

After: “Write and refine business documents, including effective use of AI writing assistance, critical editing of AI-generated content, and application of human judgment to voice and appropriateness”

Make the AI-augmented nature of work explicit in skill definitions.

Step 4: Add Verification Skills

Across domains, add verification-related competencies:

  • Fact-checking AI-generated content
  • Evaluating AI recommendations for bias and accuracy
  • Verifying AI outputs against source materials
  • Detecting AI hallucinations and errors

These competencies weren’t explicit requirements when humans did all the cognitive work.

Step 5: Adjust Skill Levels

Reconsider what constitutes different proficiency levels:

Foundation level: Can use AI tools with guidance, understands when to verify outputs, knows when to ask for help.

Intermediate level: Uses AI independently for routine tasks, consistently verifies outputs, identifies when AI is appropriate vs. inappropriate.

Advanced level: Maximises AI value across complex tasks, develops sophisticated prompting approaches, coaches others, identifies new AI applications.

Level definitions should reflect AI-augmented work.
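Because the three levels are ordered, gap analysis against a role’s required level reduces to a simple comparison. A minimal sketch, again with illustrative names rather than a standard model:

```python
from enum import IntEnum


class AILevel(IntEnum):
    # IntEnum keeps the levels ordered, so they can be compared directly
    FOUNDATION = 1
    INTERMEDIATE = 2
    ADVANCED = 3


LEVEL_DESCRIPTORS = {
    AILevel.FOUNDATION: ("Can use AI tools with guidance, understands when to "
                         "verify outputs, knows when to ask for help"),
    AILevel.INTERMEDIATE: ("Uses AI independently for routine tasks, consistently "
                           "verifies outputs, identifies when AI is appropriate"),
    AILevel.ADVANCED: ("Maximises AI value across complex tasks, develops "
                       "sophisticated prompting approaches, coaches others"),
}


def meets_requirement(current: AILevel, required: AILevel) -> bool:
    """Once levels are explicit and ordered, a gap check is a single comparison."""
    return current >= required


print(meets_requirement(AILevel.INTERMEDIATE, AILevel.FOUNDATION))
```

The same comparison supports workforce planning later: counting people below the required level for a role gives a first-pass view of the capability gap.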

Step 6: Validate With Practitioners

Before finalising, validate updated taxonomies with people doing the work:

  • Do these competencies reflect what’s actually needed?
  • Are the level definitions accurate?
  • What’s missing?

People closest to the work often see what frameworks miss.

Challenges You’ll Face

Taxonomy revision isn’t straightforward:

Rapid Evolution

AI capabilities change quickly. A taxonomy that’s accurate today may be outdated in months.

Build in regular review cycles. Accept that the taxonomy will need ongoing updates.

Resistance to Change

Existing taxonomies are embedded in systems and processes. Changing them affects compensation structures, evaluation processes, and hiring practices.

Build stakeholder engagement before changing. Explain why updates are necessary.

Measurement Challenges

Some new skills (like AI judgment and verification) are harder to measure than traditional skills.

Develop new assessment approaches alongside new skill definitions.

Over-Specificity vs. Over-Abstraction

Too specific, and taxonomies become obsolete quickly. Too abstract, and they don’t guide practical decisions.

Find the middle ground that provides useful guidance while accommodating evolution.

Using the Updated Taxonomy

An updated taxonomy enables better practices:

Better Hiring

Assess candidates on AI-relevant competencies:

  • Can they work effectively with AI tools?
  • Can they evaluate AI outputs critically?
  • Do they understand appropriate AI use?

Don’t just ask if they’ve used AI. Assess actual competence.

Better Development

Focus development on AI-era requirements:

  • AI tool proficiency building
  • Critical evaluation skill development
  • Integration of AI into existing skill areas

Align L&D investment with updated competency requirements.

Better Performance Management

Evaluate performance on updated expectations:

  • Effective AI tool utilisation
  • Quality of AI-augmented outputs
  • Appropriate judgment in AI use

Performance conversations should address AI-relevant skills.

Better Workforce Planning

Understand AI-era capability gaps:

  • Where are we strong in AI meta-skills?
  • Where do we need development?
  • What capabilities should we build vs. hire?

Planning requires accurate understanding of requirements and current state.

The Ongoing Journey

Taxonomy revision isn’t a one-time project. It’s an ongoing process:

  • Regular reviews as AI capabilities evolve
  • Continuous input from people doing the work
  • Adjustment as you learn what actually predicts success
  • Integration with broader talent management evolution

Build the capability to maintain relevant taxonomies, not just the taxonomy itself.

Starting Now

If you haven’t updated your skills taxonomy for AI, start now:

  1. Review one high-AI-impact domain as a pilot
  2. Engage practitioners in identifying changes
  3. Develop updated competency definitions
  4. Test through hiring or development activities
  5. Refine based on experience
  6. Expand to other domains

Waiting until you can do comprehensive revision means waiting while your frameworks grow increasingly irrelevant.

Start with what you can. Build from there.

Your taxonomy is a foundation for how you develop, evaluate, and deploy human capability. Make sure that foundation reflects the actual capabilities that matter in AI-augmented work.