Building an AI Literacy Program From Scratch: A Practical Guide

Last year, a mid-sized professional services firm asked me to help them build an AI literacy program. They had no existing AI training, no dedicated resources, and a workforce with widely varying comfort levels with technology.

Starting from scratch is daunting but also liberating. You can design something coherent rather than patching together disparate pieces.

Here’s the approach we developed—one that’s since been adapted by several other organisations starting their AI literacy journeys.

Before You Build: Assessment

Don’t design a program in a vacuum. Understand your starting point.

Current State Assessment

What AI exposure does your workforce already have?

  • Who’s already using AI tools (officially or unofficially)?
  • What tools are in use?
  • What attitudes exist toward AI (enthusiasm, anxiety, indifference)?
  • What technical literacy levels exist across different groups?

This assessment shapes everything that follows. A workforce that’s already experimenting with ChatGPT needs different training than one that’s never touched AI tools.
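
To make the segmentation concrete, here's a minimal sketch of how assessment results might be grouped. The survey columns, file name, and thresholds are illustrative assumptions, not a prescribed instrument:

```python
import csv
from collections import Counter

def segment_responses(path: str) -> dict:
    """Group survey respondents into rough readiness segments.

    Assumes a CSV export with hypothetical columns:
    department, uses_ai_tools (yes/no), comfort (1-5 self-rating).
    """
    segments = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["uses_ai_tools"].strip().lower() == "yes":
                segments["already experimenting"] += 1   # needs depth and guardrails
            elif int(row["comfort"]) >= 4:
                segments["ready to start"] += 1          # needs tools and use cases
            else:
                segments["anxious or unfamiliar"] += 1   # needs foundations first
    return dict(segments)

# Usage, with a hypothetical export file:
# segments = segment_responses("ai_readiness_survey.csv")
```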

Use Case Inventory

What AI applications are relevant to your organisation?

  • What tasks could AI assist with in different functions?
  • What tools are approved or planned for adoption?
  • What outcomes are you trying to achieve?

Generic AI literacy training falls flat. Relevant use cases make learning sticky.

Constraint Mapping

What constraints will shape your program?

  • Budget available
  • Time people can dedicate to learning
  • Technical infrastructure
  • Geographic distribution of workforce
  • Existing learning platforms and approaches

Work within your constraints rather than designing an ideal program you can’t execute.

The Four Levels of AI Literacy

AI literacy isn’t binary. I think of it in four levels, each building on the previous:

Level 1: AI Awareness

Understanding what AI is, what it can and can’t do, and how it’s changing work. This level is about mental models, not tool proficiency.

Everyone needs Level 1 literacy. It’s the foundation for everything else.

Level 2: AI User

Ability to use AI tools effectively for personal productivity—generating content, analysing information, automating routine tasks.

Most knowledge workers need Level 2 literacy to remain effective in evolving roles.

Level 3: AI Integrator

Ability to integrate AI into team and organisational workflows—identifying use cases, designing processes, managing implementation.

Team leaders and change champions need Level 3 literacy to drive adoption.

Level 4: AI Strategist

Ability to make strategic decisions about AI—evaluating tools, managing risks, planning transformation.

Leaders and specialists need Level 4 literacy to guide organisational direction.

Different roles need different levels. Your program should address all relevant levels, not just one.
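
One lightweight way to keep that mapping explicit is to encode it as data, so pathway assignments stay consistent as roles are added. A minimal sketch, with illustrative role names and targets:

```python
LEVEL_NAMES = {1: "AI Awareness", 2: "AI User", 3: "AI Integrator", 4: "AI Strategist"}

# Target level by role; the roles and targets here are illustrative, not fixed.
TARGET_LEVEL = {
    "individual contributor": 2,
    "team leader": 3,
    "executive": 4,
}

def required_modules(role: str) -> list[str]:
    """Levels build on each other, so a target of 3 implies completing 1-3."""
    return [LEVEL_NAMES[n] for n in range(1, TARGET_LEVEL[role] + 1)]

print(required_modules("team leader"))
# ['AI Awareness', 'AI User', 'AI Integrator']
```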

Program Architecture

A coherent program needs structure. Here’s an architecture that works:

Foundation Module (Everyone)

A core module that everyone completes, establishing shared understanding:

  • What AI is (and isn’t)
  • Current AI capabilities and limitations
  • How AI is changing work broadly
  • Your organisation’s AI approach and expectations
  • Ethical considerations and policies

Duration: 2-4 hours
Format: Can be asynchronous for flexibility

Role-Specific Pathways

After the foundation, pathways diverge based on role needs:

Individual Contributor Pathway: Focus on personal productivity—prompt engineering, content generation, analysis tasks, workflow integration.

Manager Pathway: Focus on team enablement—supporting team learning, identifying use cases, managing adoption, measuring impact.

Leader Pathway: Focus on strategy—evaluating tools, managing risk, planning transformation, communicating direction.

Each pathway might be 4-8 hours of structured learning plus ongoing practice.

Just-in-Time Resources

Not everything can be predicted. Provide resources people can access when needs arise:

  • Tool-specific tutorials
  • Prompt libraries
  • Use case examples
  • Troubleshooting guides
  • FAQ databases

These resources complement structured learning without replacing it.
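
For the prompt library in particular, a consistent entry format makes curation and search much easier. One possible sketch of an entry, with illustrative fields:

```python
# Each entry carries enough context to be reused safely; fields are illustrative.
prompt_entry = {
    "title": "Summarise client meeting notes",
    "audience": "client services",  # which function this serves
    "prompt": (
        "Summarise the meeting notes below into three sections: "
        "decisions made, open questions, and action items with owners."
    ),
    "caution": "Remove client-identifying details before pasting notes.",
    "last_reviewed": "2025-01",  # prompts go stale; review on a schedule
}
```

Whatever format you choose, the caution and review-date fields are what keep a growing library trustworthy.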

Ongoing Learning

AI evolves rapidly. One-time training becomes obsolete quickly. Build in ongoing learning:

  • Regular updates on new capabilities
  • Advanced skill sessions
  • Community forums for sharing experiences
  • External resources for self-directed learning

Content Development Approach

You don’t have to create everything from scratch. Smart content development balances:

Buy

External content can accelerate development. It works well for general AI concepts but often lacks organisation-specific relevance.

Build

Some content must be developed internally:

  • Organisation-specific policies and expectations
  • Relevant use cases for your context
  • Tool configurations specific to your environment
  • Integration with existing processes

Build what only you can build.

Curate

Often the best approach is curation—selecting and organising existing resources:

  • Recommended external courses
  • Curated article collections
  • Selected YouTube tutorials
  • Vetted prompt libraries

Curation adds value without development cost.

Delivery Mechanisms

How you deliver matters as much as what you deliver:

Synchronous vs. Asynchronous

Synchronous (live) works best for:

  • Building community and peer connection
  • Discussing complex or sensitive topics
  • Hands-on practice with facilitation
  • Q&A and troubleshooting

Asynchronous (self-paced) works best for:

  • Foundational knowledge transfer
  • Individual skill practice
  • Accommodating diverse schedules
  • Scaling to large populations

Most programs benefit from both.

Digital vs. In-Person

Digital enables scale and flexibility. In-person enables deeper engagement and connection. Hybrid approaches often work best—digital for content delivery, in-person for practice and discussion.

Learning in the Flow of Work

The most effective learning happens in context. Look for ways to embed learning in actual work:

  • AI coaching when using tools
  • Just-in-time tutorials
  • Peer support for real tasks
  • Application exercises using real work

Implementation Sequence

Roll out deliberately rather than all at once:

Phase 1: Pilot

Start with a contained pilot—one team, one function, or one location. This allows you to:

  • Test content and approaches
  • Identify problems before scale
  • Build success stories
  • Develop internal champions

Choose pilot groups carefully. You want enough enthusiasm to succeed but enough scepticism to surface real issues.

Phase 2: Champions Expansion

Train a broader group of champions who can support adoption in their areas. This multiplies your capacity to support learning.

Phase 3: Broad Rollout

With refined content and champion support in place, roll out to the broader organisation. Sequence by priority or readiness.

Phase 4: Continuous Improvement

Gather feedback. Measure outcomes. Update content. Evolve the program based on experience.

Supporting Infrastructure

Content alone isn’t enough. Build supporting infrastructure:

Policy and Governance

Clear policies about AI use—what’s allowed, what’s not, what requires approval. People can’t use AI confidently if they’re unsure about rules.

Technology Access

Ensure people have access to the tools they’re learning. Training on tools people can’t actually use is frustrating and pointless.

Support Channels

Ways for people to get help—help desks, chat channels, office hours, peer networks. Learning continues long after formal training ends.

Recognition Systems

Ways to recognise AI learning and adoption. What gets recognised gets prioritised.

Measuring Success

How will you know if your program is working?

Learning Metrics

  • Completion rates
  • Assessment scores
  • Confidence measures

These tell you about the program but not necessarily about impact.

Adoption Metrics

  • Tool usage rates
  • Feature utilisation
  • Use case implementation

These tell you whether learning translates to behaviour.

Impact Metrics

  • Productivity improvements
  • Quality improvements
  • Time savings
  • Employee satisfaction

These tell you whether adoption creates value.

Track all three levels to understand the full picture.
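
As an illustration of pairing the three levels, here's a minimal sketch of a monthly rollup. The figures and field names are placeholders, not benchmarks:

```python
# Hypothetical monthly figures; the point is reporting all three levels together.
enrolled, completed = 400, 312            # learning
licensed, weekly_active = 350, 189        # adoption
hours_saved_per_active_user = 1.4         # impact (self-reported, per week)

print(f"Learning: {completed / enrolled:.0%} completed the foundation module")
print(f"Adoption: {weekly_active / licensed:.0%} of licensed users active weekly")
print(f"Impact:   ~{weekly_active * hours_saved_per_active_user:.0f} hours saved per week")
```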

Common Pitfalls

Having built several programs, I’ve seen what goes wrong:

One-and-done thinking. AI literacy isn’t a box to check. It’s an ongoing capability to develop.

Generic content only. People need to see relevance to their specific work. Generic AI concepts don’t drive adoption.

Insufficient practice time. Knowledge without practice doesn’t create capability. Build practice into your program.

Ignoring anxiety. AI triggers real concerns. Address them directly rather than hoping enthusiasm will overwhelm fear.

Moving too fast. The goal is sustainable capability, not rapid coverage. Better to go deep with fewer people than shallow with everyone.

Getting Started

If you’re starting from scratch, here’s a practical first-step sequence:

  1. Assess current state and constraints (2-3 weeks)
  2. Define target levels by role (1 week)
  3. Design program architecture (2-3 weeks)
  4. Develop/curate foundation module (3-4 weeks)
  5. Run pilot (4-6 weeks)
  6. Refine based on feedback (2-3 weeks)
  7. Begin broader rollout (ongoing)

This timeline assumes dedicated focus; end to end, steps 1-6 add up to roughly 14-20 weeks before broad rollout begins. Adjust based on your reality.

The Investment Is Worth It

Building an AI literacy program from scratch requires significant investment—time, money, attention. But the alternative is worse: a workforce unprepared for AI-transformed work, adoption left to chance, competitive disadvantage as others move ahead.

The organisations that build these capabilities now will have advantages for years to come.

Start building.