How to Structure a 90-Day AI Upskilling Programme for Non-Technical Managers


Most AI training programmes I’ve seen make the same mistake: they try to turn managers into data scientists. That’s not the goal. The goal is to help managers make better decisions about when and how to use AI in their teams. Those are very different skill sets.

I’ve designed and run AI upskilling programmes for six organisations over the past two years. The framework below is what’s actually worked — refined through trial, error, and honest feedback from participants who weren’t shy about telling me what was a waste of their time.

Before you start: set expectations properly

The single biggest factor in programme success isn’t the curriculum. It’s expectation-setting.

Managers need to hear three things upfront:

  1. You don’t need to become technical. Nobody is going to ask you to code a model or explain how transformers work.
  2. You do need to become conversational. You should be able to have an informed discussion with technical teams about AI capabilities, limitations, and risks.
  3. This is about judgment, not knowledge. The valuable skill is knowing when AI is the right tool and when it isn’t.

If you skip this framing, you’ll lose half your cohort by week three, because they’ll think they’re falling behind.

Phase 1: Foundation (Weeks 1-3)

Week 1: What AI actually is (and isn’t)

Start by demolishing misconceptions. Most managers have a mental model of AI shaped by media hype. They either think it can do everything or they think it’s a fad. Neither is helpful.

I run a session I call “AI reality check” where we look at real case studies — successes and failures — from companies similar to theirs. No theory, no jargon. Just: here’s what Company X tried, here’s what happened, here’s why.

Pair this with hands-on time using generalist AI tools like ChatGPT, Claude, or Gemini. Let managers put questions from their own work to the tools and see what comes back. The “aha” moments in this session are consistent — people realise both how capable and how unreliable these tools can be.

Week 2: AI in your industry

This needs to be specific. Generic AI overviews are cheap content. What managers actually want to know is: what are my competitors doing? What are the proven use cases in my sector?

Commission or curate a briefing document specific to your industry. If you’re in financial services, pull from Deloitte’s financial services AI research. If you’re in manufacturing, look at the Industry 4.0 case studies. Make it relevant or people disengage.

Week 3: Data literacy basics

Managers don’t need to understand algorithms, but they absolutely need to understand data. Specifically: where does our data live? Is it clean? Is it sufficient? Is it biased?

I’ve found that a two-hour workshop on “reading” a dataset — understanding what fields mean, spotting obvious quality issues, recognising when a sample is too small — pays enormous dividends later when managers are evaluating AI proposals from vendors or internal teams.
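
If your facilitators want something concrete to put on screen, the core of the exercise fits in a few lines of pandas. This is a minimal sketch rather than workshop material: the toy data, the column names (customer_id, region, churn_flag), and the planted quality problems are all placeholders for whatever extract your organisation actually uses.

```python
import io
import pandas as pd

# Toy extract standing in for a real export; in the workshop we use the
# organisation's own data. The planted duplicate and missing value give
# the group something to find.
csv = io.StringIO(
    "customer_id,region,tenure_months,churn_flag\n"
    "1,North,12,0\n"
    "2,North,3,1\n"
    "2,North,3,1\n"   # duplicate row
    "3,South,,0\n"    # missing tenure
)
df = pd.read_csv(csv)

# How big is the sample? A few hundred rows is rarely enough to model on.
print(f"Rows: {len(df)}, columns: {len(df.columns)}")

# Where are the gaps? High missingness in a key field is a red flag.
print(df.isna().mean().sort_values(ascending=False))

# Duplicates quietly inflate the apparent sample size.
print(f"Duplicate rows: {df.duplicated().sum()}")

# Is any group badly under-represented? Heavy skew hints at bias.
print(df["region"].value_counts(normalize=True))
```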

Phase 2: Application (Weeks 4-7)

Week 4: Identifying AI opportunities in your team

This is where it gets practical. Each manager maps their team’s processes and identifies three to five candidates for AI assistance. Not automation — assistance. The distinction matters: assistance keeps a person in the loop and accountable for the output, which is both safer and easier for teams to accept.

I provide a simple scoring matrix: potential impact, data availability, implementation complexity, and risk. Managers score their candidates and bring them to a group discussion. The peer feedback is consistently the most valued part of the entire programme.
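
For readers who want to see the arithmetic, here’s roughly what that matrix looks like once the scores come back. The weights and both example candidates are invented for illustration; every organisation tunes these differently.

```python
# Scores are 1-5. Complexity and risk are scored so that higher is
# better (easier to implement, lower risk), which keeps the sum simple.
# The weights and both example candidates are illustrative only.
WEIGHTS = {
    "impact": 0.4,
    "data_availability": 0.3,
    "complexity": 0.2,
    "risk": 0.1,
}

def score(candidate: dict) -> float:
    """Weighted sum of the four criterion scores."""
    return sum(candidate[k] * w for k, w in WEIGHTS.items())

candidates = {
    "Draft first-pass replies to routine customer emails":
        {"impact": 4, "data_availability": 5, "complexity": 4, "risk": 3},
    "Summarise weekly team reports for the leadership pack":
        {"impact": 3, "data_availability": 4, "complexity": 5, "risk": 5},
}

# Rank candidates by weighted score, highest first.
for name, c in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(c):.1f}  {name}")
```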

Week 5: Evaluating AI tools and vendors

The AI vendor landscape is a mess. Every software company claims to have AI now. Managers need a framework for separating genuine capability from marketing.

I teach a straightforward evaluation checklist: What problem does this solve? What data does it need? What’s the evidence it works? What happens when it’s wrong? What does it cost — including hidden costs like integration, training, and ongoing maintenance?
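
One thing that helps in practice is treating the checklist as a structured template, where blank answers are the signal. A minimal sketch, with field names of my own choosing:

```python
from dataclasses import dataclass, fields

@dataclass
class VendorEvaluation:
    """One record per pitch; empty fields are the questions to ask."""
    problem_solved: str = ""
    data_required: str = ""
    evidence_it_works: str = ""
    failure_behaviour: str = ""   # what happens when it's wrong?
    total_cost: str = ""          # include integration, training, maintenance

    def gaps(self) -> list[str]:
        """Fields the pitch left unanswered."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# A pitch that names a problem but offers no evidence: a common pattern.
pitch = VendorEvaluation(problem_solved="Triage inbound support tickets")
print("Ask the vendor about:", pitch.gaps())
```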

We run a mock vendor evaluation using real pitch decks (anonymised) from AI companies. It’s revealing how quickly managers get better at spotting vague claims once they have a structured way to ask questions.

Week 6: Risk, ethics, and governance

This isn’t optional or nice-to-have. Managers who deploy AI without understanding the risk landscape are a liability.

Cover the essentials: bias and fairness, privacy and data protection, transparency and explainability, and the regulatory environment. The Australian Government’s AI Ethics Framework is a solid starting point for Australian organisations.

Keep it practical. Use scenario-based exercises: “Your team has built a model that predicts employee attrition. What could go wrong? Who needs to approve this? What would you check before deploying it?”

Week 7: Communicating about AI

Managers are the bridge between technical teams and senior leadership. They need to be able to translate both ways — explaining business needs to technical people and explaining technical constraints to the board.

Practice sessions work well here. Have managers present a one-page AI business case to a panel playing the role of sceptical executives. The feedback is always constructive and often funny.

Phase 3: Integration (Weeks 8-12)

Weeks 8-10: Live project

Each manager takes their best AI opportunity from Week 4 and develops a real pilot proposal. Not a hypothetical. Something they’ll actually implement.

They should define the problem, propose a solution, identify the data requirements, estimate costs and benefits, and outline the risks and mitigations. Technical mentors from IT or data teams should be available for consultation, but the manager owns the proposal.

Weeks 11-12: Present and refine

Managers present their proposals to peers and leadership. The best proposals get funded. Seriously — tie the programme to real budget allocation. Nothing motivates like genuine stakes.

The final session is a retrospective. What did participants learn? What’s still unclear? What do they need next? This feeds directly into your ongoing development plan.

What makes this work

Three things separate programmes that stick from programmes that fade:

Executive sponsorship. If the CEO or COO doesn’t visibly support this, managers will treat it as optional. It isn’t.

Peer learning. The group discussions and peer feedback are where the deepest learning happens. Don’t cut them to save time.

Ongoing reinforcement. Ninety days builds a foundation. You need monthly touchpoints afterward — a lunch-and-learn, a case study review, a guest speaker — to keep the momentum going.

AI literacy for managers isn’t a one-off training event. It’s an ongoing capability your organisation needs to build. But ninety days is enough to get a meaningful start.