RTO Compliance Meets AI: What Training Providers Need to Know
The intersection of AI and Registered Training Organisation (RTO) compliance is getting complicated. I’ve been fielding questions from RTO managers for months now: Can we use AI in assessments? What about AI-generated learning materials? How do we maintain quality when the technology changes weekly?
These aren’t hypothetical concerns. ASQA has started asking questions about AI use in audits, and many RTOs are scrambling to understand what compliant AI integration looks like.
Let me share what I’ve learned from working with training providers navigating this landscape.
The Compliance Framework Hasn’t Changed
Here’s the first thing to understand: the Standards for RTOs 2015 haven’t been rewritten for AI. The fundamental requirements remain the same:
- Training and assessment must meet the requirements of the training package
- Assessment must be valid, reliable, flexible, and fair
- Trainers and assessors must hold appropriate qualifications
- Students must demonstrate competency, not just complete activities
What’s changed is how we apply these principles when AI is part of the picture.
AI in Assessment: The Core Questions
The biggest compliance concern I hear about is assessment. Specifically: if a student uses AI to complete an assessment, is that cheating?
The answer depends on what you’re assessing.
If the unit requires the student to demonstrate writing skills, and they submit AI-generated text, that’s a validity issue. They haven’t demonstrated the competency. The assessment isn’t measuring what it’s supposed to measure.
But if the unit requires the student to demonstrate problem-solving in their field, and using AI tools is part of how professionals solve problems in that field, then prohibiting AI might actually undermine validity.
The key question is: what does competent performance look like in the real workplace? If professionals in that role use AI tools, your assessment should probably allow them too—while still ensuring students demonstrate the underlying competencies.
Designing AI-Appropriate Assessments
This means rethinking assessment design, not just adding AI policies.
Consider these approaches:
Oral questioning and demonstration. You can’t fake competency in a live conversation with an assessor. Supplement written assessments with verbal components that probe understanding.
Process documentation. Ask students to document their process, including how they used any tools. This lets you assess decision-making and critical evaluation, not just outputs.
Scenario-based assessment. Create scenarios that require judgment and adaptation. AI can help with component tasks, but the overall response requires human integration.
Workplace observation. Nothing beats watching someone perform tasks in context. If your assessment strategy relies heavily on this, AI concerns diminish.
AI-Generated Training Materials
The other big question is whether RTOs can use AI to create training materials.
Technically, there’s no regulation prohibiting this. The Standards require that materials support learners to meet competency requirements. They don’t specify how materials must be created.
But there are practical concerns:
Accuracy. AI can generate plausible-sounding content that’s factually wrong. Any AI-generated materials need rigorous review by qualified subject matter experts.
Currency. AI models have knowledge cutoffs. For fields where regulations or practices change frequently, AI-generated content may be outdated before it’s delivered.
Contextualisation. AI produces generic content. The contextualisation that makes training relevant to specific industries, regions, or learner groups still requires human expertise.
My recommendation: use AI as a drafting tool, not a finished-product generator. AI can create first drafts that humans then review, revise, and validate. This saves time while maintaining quality.
Documentation and Evidence
Whatever approach you take to AI, document it.
Your Training and Assessment Strategy (TAS) should address:
- How AI tools are integrated into training delivery
- Rules for AI use in assessments (permitted, prohibited, or conditional)
- How you ensure assessment validity when AI is involved
- Quality assurance processes for AI-generated materials
This documentation serves two purposes. It demonstrates to auditors that you’ve thought through the implications. And it provides clear guidance to trainers and students about expectations.
Training Your Trainers
Here’s a gap I see repeatedly: RTOs haven’t trained their trainers on AI.
If your trainers don’t understand how AI tools work, they can’t:
- Recognise AI-assisted submissions
- Teach students to use AI appropriately
- Design assessments that account for AI capabilities
- Model professional AI use in their field
This is a compliance issue. Clause 1.13 of the Standards requires that training and assessment is delivered by trainers with current industry skills, and Clause 1.16 requires ongoing professional development in vocational training and assessment practice. In many industries, AI fluency is now a current industry skill.
Consider what professional development your training team needs.
The Student Perspective
Don’t forget that students have legitimate interests here too.
If they’re training for roles where AI tools are standard, they should learn to use those tools as part of their qualification. Prohibiting AI entirely might leave them unprepared for their workplaces.
On the other hand, they also need to develop foundational skills that don’t depend on AI. A balance is needed.
Be transparent with students about your approach. Explain the rationale, not just the rules. Students who understand why certain assessments restrict AI use are more likely to comply than students who feel arbitrarily constrained.
What Auditors Are Looking For
Based on recent audit activity, here’s what ASQA appears to be examining:
- Evidence that assessment tasks assess the actual competencies required
- Processes for detecting and responding to academic integrity issues
- Training and Assessment Strategies that reflect current delivery practices
- Evidence that trainers understand and can implement AI policies
The underlying concern is validity. Auditors want to know that students who receive qualifications have actually demonstrated competency. AI doesn’t change that fundamental expectation—it just adds complexity to how you demonstrate compliance.
Practical Steps for RTOs
If you haven’t already addressed AI in your compliance approach, start here:
- Audit your current assessments. Which ones are vulnerable to AI completion in ways that undermine validity? Prioritise redesigning those.
- Update your TAS. Document your approach to AI in training and assessment. Be specific about what’s permitted and what isn’t.
- Train your team. Ensure trainers understand AI capabilities and your policies. Build this into your professional development requirements.
- Communicate with students. Make expectations clear from enrolment through graduation.
- Review and iterate. AI capabilities are changing rapidly. What works today may need adjustment in six months.
Looking Ahead
The regulatory environment will eventually catch up with the technology. ASQA and state regulators are watching how AI develops and will likely provide more explicit guidance over time.
Until then, the safest approach is to apply existing principles thoughtfully. The Standards have always required valid assessment of genuine competency. AI doesn’t change that requirement—it just requires us to think more carefully about how we meet it.
The RTOs that get this right will produce graduates who are both competent and AI-fluent. That’s good for students, good for employers, and good for the sector.