AI Risk Assessment Starter Sheet

A structured template for assessing the risks of introducing a new AI tool or AI-assisted process. Covers data, accuracy, bias, security, and operational risks.

Updated 10 April 2026

Who is this for?

Data protection officers, IT managers, risk leads, and department heads who need to assess the risks of a proposed AI tool before approving its use.

When to use it

Before approving any new AI tool for use, or when an existing AI tool is being used in a new way that may create additional risk. Also useful as a precursor to a full DPIA.

Template

# AI Risk Assessment Starter Sheet

**Organisation:** ___________________________
**Tool / process being assessed:** ___________________________
**Proposed use:** ___________________________
**Assessor name and role:** ___________________________
**Date of assessment:** ___________________________
**Review date:** ___________________________

---

## Part 1: DPIA Screening

Answer the following questions to determine whether a full Data Protection Impact Assessment (DPIA) is required.

| Question | Yes | No | Notes |
|---------|-----|-----|-------|
| Will the tool process personal data? | | | |
| Will it use systematic or automated profiling of individuals? | | | |
| Will it process special category data (health, race, religion, etc.)? | | | |
| Will it make decisions with legal or similarly significant effects on people? | | | |
| Will it monitor individuals (employees, clients, etc.) at scale? | | | |
| Will it combine datasets in ways individuals would not expect? | | | |
| Does it involve new or innovative uses of AI with unclear risks? | | | |
| Could a failure cause significant harm or distress to individuals? | | | |

**If you answered YES to any of the above, a full DPIA is required before proceeding.**

DPIA required: Yes / No
DPIA reference (if completed): ___________________________
DPO sign-off obtained: Yes / No / Not required
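The screening table above reduces to a single rule: any "yes" answer triggers a full DPIA. As an illustrative sketch only (question texts abbreviated; this is not an official ICO screening tool), that rule can be expressed as:

```python
# Abbreviated versions of the Part 1 screening questions.
SCREENING_QUESTIONS = [
    "Processes personal data?",
    "Systematic or automated profiling?",
    "Special category data?",
    "Decisions with legal or similarly significant effects?",
    "Monitoring individuals at scale?",
    "Combining datasets in unexpected ways?",
    "New or innovative AI use with unclear risks?",
    "Failure could cause significant harm or distress?",
]

def dpia_required(answers: dict) -> bool:
    """Return True if any screening question was answered 'yes'.

    `answers` maps question text to True (yes) / False (no).
    Unanswered questions are treated as 'no' here; in practice an
    unanswered question should block sign-off instead.
    """
    return any(answers.get(q, False) for q in SCREENING_QUESTIONS)
```

Note the deliberately conservative shape: a single "yes" is enough, and there is no weighting or scoring at this stage.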

---

## Part 2: Data Risk Assessment

### 2.1 What data will be entered into this tool?

Describe the types of data that users may input. Even if the intended use does not involve personal data, consider what staff might realistically enter:

___________________________

### 2.2 Data Storage and Processing

| Question | Response |
|---------|----------|
| Where does the tool store data? (UK / EU / US / Other) | |
| How long does the tool retain inputs? | |
| Does the vendor use inputs to train their models? | Yes / No / Unknown |
| Is a Data Processing Agreement available? | Yes / No |
| Is the tool certified to an acceptable security standard? | Yes (specify) / No / Unknown |

### 2.3 Data Risk Rating

| Risk | Likelihood (H/M/L) | Impact (H/M/L) | Risk level | Mitigation |
|------|-------------------|----------------|-----------|-----------|
| Personal data entered accidentally | | | | Restrict use / user training |
| Data stored in non-compliant jurisdiction | | | | Verify DPA / reject tool |
| Vendor uses data to train AI models | | | | Opt out / disable / reject |
| Data breach at vendor side | | | | Review vendor security posture |
| Excessive data retention by vendor | | | | Check retention policy |
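The rating tables in Parts 2 to 6 ask for a likelihood and an impact (H/M/L) and a combined risk level, but leave the combination method open. One common convention, shown here as an illustrative sketch rather than a method prescribed by this template, multiplies numeric scores on a 3×3 matrix:

```python
def risk_level(likelihood: str, impact: str) -> str:
    """Combine H/M/L likelihood and impact into an overall rating.

    Scoring convention (illustrative, not mandated by the sheet):
    L=1, M=2, H=3; multiply the two scores and band the product.
    Product 6-9 -> HIGH, 3-4 -> MEDIUM, 1-2 -> LOW.
    """
    score = {"L": 1, "M": 2, "H": 3}
    product = score[likelihood] * score[impact]
    if product >= 6:
        return "HIGH"
    if product >= 3:
        return "MEDIUM"
    return "LOW"
```

Under this convention a high-impact but low-likelihood risk (H × L = 3) still lands at MEDIUM, which is usually the desired behaviour: severe outcomes should not be rated away purely on likelihood.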

---

## Part 3: Accuracy and Reliability Risk

| Risk | Likelihood (H/M/L) | Impact (H/M/L) | Risk level | Mitigation |
|------|-------------------|----------------|-----------|-----------|
| AI produces factually incorrect outputs (hallucination) | | | | Mandatory human review of all outputs |
| AI output used without verification | | | | Training / supervision |
| Tool gives out-of-date information | | | | Check training cutoff / verify time-sensitive facts |
| Errors in AI output cause reputational or legal harm | | | | Sign-off process for external use |
| Staff over-rely on AI and lose critical thinking skills | | | | Policy limits / refresher training |

**Minimum review controls to put in place:**
- [ ] All AI-generated outputs reviewed by a qualified member of staff before use
- [ ] External publications: editorial sign-off required
- [ ] Legal / financial / medical content: subject matter expert review required
- [ ] AI use noted in document metadata where appropriate

---

## Part 4: Bias and Fairness Risk

| Question | Response | Action |
|---------|----------|--------|
| Could this tool's outputs reflect bias against any protected group? | Yes / No / Unknown | |
| Has the tool been evaluated for fairness by the vendor? | Yes / No / Unknown | |
| Will this tool assist in decisions about people (hiring, benefits, etc.)? | Yes / No | |
| Is there a human in the loop for all consequential decisions? | Yes / No | |

**Bias risks identified:**
___________________________

**Mitigations:**
___________________________

---

## Part 5: Security Risk

| Risk | Likelihood (H/M/L) | Impact (H/M/L) | Risk level | Mitigation |
|------|-------------------|----------------|-----------|-----------|
| Unauthorised access to AI tool account | | | | MFA, strong passwords, access controls |
| Sensitive data exposed via public AI tool | | | | Block personal data entry / user training |
| AI tool exploited to generate harmful content | | | | Content filters / acceptable use training |
| Vendor suffers security breach | | | | Vendor due diligence / DPA |
| Staff use personal AI accounts for work tasks | | | | Policy / monitoring |

---

## Part 6: Operational and Reputational Risk

| Risk | Likelihood (H/M/L) | Impact (H/M/L) | Risk level | Mitigation |
|------|-------------------|----------------|-----------|-----------|
| AI tool becomes unavailable / vendor shuts down | | | | Contingency plan |
| Reputational damage from AI misuse | | | | Policy / approval process |
| Legal challenge to AI-influenced decisions | | | | Human oversight / documentation |
| Staff resistance or fear causing productivity issues | | | | Change management / training |
| Regulatory non-compliance | | | | Policy review / DPO input |

---

## Part 7: Overall Risk Summary and Decision

| Risk area | Overall rating (H/M/L) |
|-----------|----------------------|
| Data risk | |
| Accuracy risk | |
| Bias risk | |
| Security risk | |
| Operational risk | |

**Overall risk level:** HIGH / MEDIUM / LOW

**Recommendation:**
- [ ] Approve with no additional controls
- [ ] Approve with the following controls: ___________________________
- [ ] Approve subject to DPIA completion
- [ ] Do not approve — risk too high without further mitigation
- [ ] Refer to DPO / senior leadership

**Assessor signature:** ___________________________
**Date:** ___________________________
**DPO sign-off (if required):** ___________________________

This is a starter sheet to help identify risks — it is not a full Data Protection Impact Assessment (DPIA). If the screening questions indicate high risk to individuals, a full DPIA is required under UK GDPR Article 35. Consult your DPO.