Basic Workplace AI Policy
A straightforward AI policy template suitable for small to medium organisations adopting AI tools for the first time. Covers acceptable use, data handling, and staff responsibilities.
Updated 10 April 2026
Who is this for?
School leaders, HR managers, office managers, and team leads who need a simple written AI policy to give staff clarity on how AI tools may be used at work.
When to use it
When your organisation is beginning to use AI tools such as ChatGPT, Copilot, or Gemini and needs a written policy in place before wider roll-out.
Template
# Workplace AI Policy
**Organisation name:** ___________________________
**Policy owner:** ___________________________
**Date adopted:** ___________________________
**Review date:** ___________________________
**Version:** 1.0
---
## 1. Purpose
This policy sets out how staff at [Organisation Name] may use artificial intelligence (AI) tools in the course of their work. It aims to support responsible, safe, and productive use of AI while protecting the organisation, its clients, and its data.
---
## 2. Scope
This policy applies to:
- All employees, contractors, and volunteers
- Any AI tool used on organisation devices, networks, or accounts
- AI tools used for work purposes on personal devices
---
## 3. Approved AI Tools
Staff may only use AI tools that have been approved by [Name/Role]. As of the date of this policy, the following tools are approved:
| Tool | Approved uses | Restrictions |
|------|--------------|--------------|
| [e.g. Microsoft Copilot] | Drafting emails, summarising documents | No client data |
| [e.g. ChatGPT (free)] | Research, brainstorming | No confidential data |
| | | |
To request approval for a new AI tool, complete the AI Tool Request Form and submit to [Name/Email].
---
## 4. Acceptable Use
Staff may use approved AI tools to:
- Draft, edit, or summarise text
- Brainstorm ideas or structures
- Research general topics
- Create training materials or internal documents
- Generate code or scripts (subject to review)
---
## 5. Prohibited Uses
Staff must NOT use AI tools to:
- Process personal data (names, addresses, NI numbers, medical data) without explicit approval
- Handle commercially sensitive or confidential information unless the tool is approved for that purpose
- Generate content that will be published externally without human review
- Make decisions that affect individuals (e.g. HR decisions, benefit assessments) without human oversight
- Circumvent existing policies, procedures, or legal obligations
- Misrepresent AI-generated content as entirely human-authored in formal or legal contexts
---
## 6. Data Protection
- Do not enter personal data into AI tools unless the tool is GDPR-compliant and has been approved for that purpose
- Check whether the AI tool stores or trains on your inputs — if it does, treat it as a public channel
- When in doubt, anonymise or remove identifying details before using an AI tool
- All AI use involving personal data must comply with our Data Protection Policy and the UK GDPR
---
## 7. Quality and Accuracy
- AI outputs must always be reviewed by a member of staff before use
- Staff are responsible for the accuracy of any AI-assisted work they submit or share
- Do not rely on AI for factual, legal, medical, or financial advice without independent verification
- Be aware that AI tools can produce plausible-sounding but incorrect information ("hallucinations")
---
## 8. Transparency
- When submitting work to external parties, disclose AI assistance where it is material to the content
- Do not present AI-generated work as entirely original in academic, legal, or formal contexts
- Internal documents may note AI assistance in a footer or header where appropriate
---
## 9. Staff Responsibilities
All staff must:
- Read and follow this policy
- Complete any required AI literacy training
- Report misuse or data incidents involving AI tools to [Name/Email] immediately
- Keep their knowledge of AI risks up to date
---
## 10. Manager Responsibilities
Managers must:
- Ensure their team is aware of this policy
- Not pressure staff to use AI tools in ways that breach this policy
- Escalate requests for new tools to the appropriate approver
- Support staff in raising concerns about AI use
---
## 11. Breaches
Breaches of this policy may result in disciplinary action. Breaches involving personal data that pose a risk to individuals will be reported to the ICO within 72 hours, as required under UK GDPR.
---
## 12. Review
This policy will be reviewed annually or when significant changes occur in AI technology or legislation. The policy owner is responsible for scheduling reviews.
---
**Approved by:** ___________________________
**Signature:** ___________________________
**Date:** ___________________________

---

This template is for educational and guidance purposes only. It should be reviewed and adapted by your legal or HR team before adoption as an official organisational policy.