Creating an AI Policy for Your Workplace


A friend recently told me her company has a strict no-AI policy. “But everyone uses it anyway,” she added. “We just don’t mention it.”

This is the worst of all outcomes: employees using AI without guidance, no support for using it well, unmanaged risks, and a culture of hiding rather than learning.

If you haven’t developed an AI policy for your workplace, your employees have developed one for themselves. It’s probably inconsistent, occasionally risky, and missing the productivity opportunities that thoughtful adoption would capture.

Here’s how to create a policy that actually works.

Why Policy Matters

AI policies serve multiple purposes:

Risk management. AI tools create genuine risks around data privacy, confidentiality, accuracy, and intellectual property. Policy establishes guardrails.

Clarity for employees. Without guidance, people either avoid AI entirely (missing productivity gains) or use it without appropriate caution. Policy provides direction.

Consistency. When different teams make different decisions about AI, the result is operational complexity and equity issues. Policy establishes baseline expectations.

Legal and regulatory foundation. In regulated industries, demonstrable governance of AI use may be legally required. Policy documents that governance. AHRI has published guidance on workplace AI policy development that can serve as a useful reference for Australian organisations.

Cultural signal. A thoughtful policy signals that the organisation takes AI seriously and wants to help people use it well.

The Policy Development Process

Step 1: Understand Current State

Before writing policy, understand what’s actually happening:

  • What AI tools are people using?
  • For what purposes?
  • What data is being input?
  • What risks have emerged?
  • What benefits have been realised?

Anonymous surveys and focus groups can surface honest information that direct questions might not.

Step 2: Identify Stakeholders

AI policy affects multiple constituencies:

  • Line employees using AI for daily work
  • Managers overseeing AI use
  • IT managing technology and security
  • Legal concerned with liability and compliance
  • HR addressing employment implications
  • Privacy teams managing data protection
  • Compliance ensuring regulatory adherence

Involve relevant stakeholders in development. Policies developed in silos often miss critical perspectives.

Step 3: Categorise Use Cases

Different AI applications require different treatment. Useful categories:

Fully permitted: Low-risk applications with clear benefit. E.g., grammar checking, scheduling assistance, general research.

Permitted with guidelines: Moderate-risk applications that require appropriate caution. E.g., content drafting, analysis assistance, customer communication drafts.

Restricted: Higher-risk applications requiring specific approval. E.g., using AI with sensitive data, customer-facing AI responses, automated decisions.

Prohibited: Applications too risky to allow. E.g., inputting confidential client data, submitting AI work without review, using AI for regulated decisions.

Step 4: Address Key Issues

Any comprehensive policy should address:

Data and confidentiality.

  • What data can be input to AI systems?
  • What data is prohibited?
  • How should sensitive information be handled?
  • What are the rules for different data classifications?

Output quality and verification.

  • Who is responsible for AI output accuracy?
  • What verification processes are required?
  • How should AI limitations be communicated?

Disclosure and transparency.

  • When must AI use be disclosed?
  • To customers? To colleagues? To managers?
  • How should AI-assisted work be attributed?

Approved tools.

  • Which AI tools are approved for use?
  • What’s the process for requesting new tools?
  • Are personal AI accounts permitted for work tasks?

Intellectual property.

  • Who owns AI-generated content?
  • What about content generated using company data?
  • How do IP considerations affect what can be input?

Employment and HR implications.

  • How does AI use affect performance expectations?
  • What training and support are available?
  • How are AI-related skills factored into career development?

Step 5: Draft Clear Guidelines

Write guidelines that people can actually follow:

  • Use plain language, not legal jargon
  • Provide concrete examples
  • Distinguish between rules and suggestions
  • Explain rationale so people can apply judgment to new situations
  • Keep it as short as possible while covering essentials

Step 6: Build in Flexibility

AI capabilities and risks are evolving rapidly. Policy that’s too rigid becomes obsolete quickly.

  • Include review cycles (quarterly during rapid change)
  • Create processes for exceptions and emerging situations
  • Allow for experimentation within boundaries
  • Distinguish principles (stable) from specific guidelines (may change)

Step 7: Plan for Implementation

Policy is meaningless if not implemented:

  • How will the policy be communicated?
  • What training supports understanding?
  • How will compliance be monitored?
  • What happens when policy is violated?

Implementation deserves as much attention as policy content.

Common Policy Mistakes

Too Restrictive

Policies that prohibit all AI use drive it underground. Employees who could benefit from AI assistance hide their use rather than engage productively.

Better approach: permit appropriate use with guidelines, restrict only what’s genuinely risky.

Too Vague

“Use AI responsibly” isn’t a policy. People need specific guidance on what’s permitted and what isn’t.

Better approach: concrete guidelines with examples, even if comprehensive coverage isn’t possible.

Disconnected from Reality

Policies written by people who don’t understand how AI is actually used often miss the point. Consulting only lawyers and IT yields compliance-focused policies that don’t address real workflow questions.

Better approach: involve people who actually use AI in their work.

Ignoring the Training Gap

Policy tells people what to do. Training develops the ability to do it. Policy without training assumes capability that may not exist.

Better approach: pair policy with capability development programs.

Set and Forget

AI changes monthly. Policy that made sense six months ago may be obsolete today. Annual review cycles are too slow.

Better approach: regular review and update cycles, with mechanisms for addressing emerging issues.

Sample Policy Elements

Here’s a framework for structuring policy content:

Purpose and Scope

  • What this policy covers
  • Who it applies to
  • Connection to organisational values and strategy

Approved Tools

  • List of permitted AI tools
  • Process for requesting new tools
  • Rules about personal accounts

Permitted Uses

  • Categories of permitted use
  • Examples within each category
  • Relevant guidelines for each

Restricted and Prohibited Uses

  • What requires specific approval
  • What is never permitted
  • How to handle grey areas

Data Protection

  • Data classification rules
  • What can and can’t be input
  • Handling of outputs containing sensitive information

Quality and Verification

  • Responsibility for output accuracy
  • Required verification processes
  • Documentation requirements

Disclosure Requirements

  • When AI use must be disclosed
  • How to disclose appropriately
  • Customer and stakeholder communication

Compliance and Enforcement

  • Monitoring approach
  • Consequences of violations
  • Exception request process

Training and Support

  • Available training resources
  • Where to get help
  • Ongoing development expectations

Review and Updates

  • Review schedule
  • Feedback mechanisms
  • How changes will be communicated

Communicating the Policy

How you communicate policy shapes whether it’s followed:

Position positively. “Here’s how to use AI productively and safely” beats “Here’s what you can’t do.” Lead with enablement, not restriction.

Provide training. Don’t just distribute a document. Help people understand what it means for their work.

Invite questions. Create channels for people to ask about situations not explicitly covered.

Lead by example. If leadership uses AI appropriately and openly, others will follow.

Acknowledge imperfection. The policy won’t cover everything. Encourage people to use good judgment and ask questions when situations are unclear.

The Ongoing Journey

An AI policy isn’t a destination—it’s a starting point. You’ll learn from implementation. Edge cases will emerge. Technology will change. Employee feedback will highlight gaps.

Build in mechanisms for continuous improvement:

  • Regular pulse checks on how policy is working
  • Collection of questions and edge cases
  • Periodic review and revision
  • Monitoring of external developments

The organisations that get AI governance right won’t be those with perfect initial policies. They’ll be those that learn and adapt as the landscape evolves.

That’s the real goal: a governance approach that keeps pace with change while managing risk and enabling value.