# Staff AI Acceptable Use Policy
A detailed acceptable use policy for staff covering the dos and don'ts of AI tool usage, with sign-off section. Suitable for organisations that already have a broader AI strategy and need a staff-facing document.
Updated 10 April 2026
## Who is this for?
HR departments, IT teams, and line managers who need a staff-facing document that clearly sets behavioural expectations around AI tool use.
## When to use it
When onboarding new staff who will have access to AI tools, or when rolling out a new AI tool across the organisation and a signed acknowledgement is required.
## Template
# Staff AI Acceptable Use Policy

**Organisation:** ___________________________
**Issued by:** ___________________________
**Effective date:** ___________________________
**Review date:** ___________________________

---

## About This Policy

This document tells you, as a member of staff, what you are and are not permitted to do when using AI tools as part of your work. Please read it carefully. A sign-off section is included at the end.

If you have questions, speak to your line manager or contact [Name/Email].

---

## What We Mean by "AI Tools"

For the purposes of this policy, AI tools include any application, platform, or feature that uses artificial intelligence to generate text, images, code, audio, summaries, or decisions. This includes but is not limited to:

- Large language model chatbots (e.g. ChatGPT, Claude, Gemini, Copilot)
- AI writing assistants built into office software
- AI image generators
- AI transcription or summarisation tools
- Automated decision-support tools

---

## What You CAN Do

You are permitted to use approved AI tools to:

- [ ] Draft internal communications, reports, or documents (subject to human review)
- [ ] Summarise meeting notes or lengthy documents
- [ ] Brainstorm ideas, structures, or approaches to a task
- [ ] Research general topics or create first-draft content
- [ ] Write, review, or comment on code (subject to technical review)
- [ ] Translate or rephrase text for clarity
- [ ] Create presentation outlines or slide content

You must always review, verify, and take responsibility for any AI-generated output before using it.

---

## What You CANNOT Do

You must NOT use AI tools to:

- [ ] Enter the personal data of clients, service users, employees, or members of the public
- [ ] Process special categories of data (health, race, religion, etc.) or financial data without explicit written approval
- [ ] Submit AI-generated content to external parties without disclosing AI involvement where it is material
- [ ] Publish AI-generated content on the organisation's external channels without editorial sign-off
- [ ] Make or implement decisions that significantly affect individuals (e.g. recruitment, disciplinary, financial) using AI outputs alone
- [ ] Use unapproved AI tools for any work-related task
- [ ] Attempt to bypass content filters or safety features of AI tools
- [ ] Use AI to impersonate colleagues, clients, or other individuals

---

## Data Protection Rules

1. Before entering any information into an AI tool, ask yourself: "Could this identify a person?"
2. If yes, do not enter it unless the tool is explicitly approved for personal data and documented as such.
3. Remove or anonymise names, addresses, NI numbers, case references, and other identifiers before using AI tools for drafting.
4. Be aware that many AI tools store conversation history and may use it to train future models. If in doubt, assume your input is not private.
5. If you believe a data incident has occurred (e.g. you accidentally entered personal data), report it immediately to [Name/Email].

---

## Accuracy and Responsibility

- AI tools can produce confident-sounding but incorrect information. This is known as "hallucination".
- You are personally responsible for verifying the accuracy of any AI-assisted work you submit.
- Do not use AI outputs as the sole basis for legal, medical, financial, or safety-critical decisions.
- If you are unsure whether an AI output is accurate, seek a second source or ask a subject matter expert.

---

## Approved Tools

The following AI tools are currently approved for staff use:

| Tool | Approved For | Data Restrictions | Notes |
|------|--------------|-------------------|-------|
|      |              |                   |       |
|      |              |                   |       |

For an up-to-date list or to request a new tool, contact [Name/Email].

---

## Reporting and Escalation

Report the following immediately to [Name/Email]:

- Accidental entry of personal or sensitive data into an AI tool
- Suspected misuse of AI tools by a colleague
- AI-generated output that is discriminatory, harmful, or potentially illegal
- Security concerns related to AI tool use

---

## Training

All staff using AI tools must complete [Name of Training Course] before use. Training will be made available via [Platform/Link]. Completion is logged by [Name/Team].

---

## Consequences of Breach

Breaches of this policy may result in:

- Removal of access to AI tools
- Disciplinary action, up to and including dismissal
- Referral to the ICO or other regulatory bodies where data protection obligations are breached

---

## Staff Acknowledgement

By signing below, I confirm that I have read, understood, and agree to comply with the Staff AI Acceptable Use Policy.

**Name (print):** ___________________________
**Job title:** ___________________________
**Department:** ___________________________
**Signature:** ___________________________
**Date:** ___________________________

Return this signed copy to: ___________________________
*This template should be adapted to reflect your organisation's specific approved tools, data handling obligations, and legal context. Review it with your HR and legal advisors before issue.*