Why you need one
63% of UK organisations have no AI acceptable use policy. That means staff are making their own decisions about which AI tools to use, what data to share, and how to handle the output. Without written rules, there's no accountability when something goes wrong.
An AI acceptable use policy isn't about banning AI. It's about setting clear boundaries so your team can use AI productively without putting the business at risk.
What a good policy covers
1. Which tools are approved
List every AI tool your business has formally approved for use. For each tool, specify what it can be used for and any restrictions. Be specific — "ChatGPT is approved for drafting internal communications only" is useful. "AI tools may be used at the employee's discretion" is not.
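An approved-tools list is easiest to enforce when it lives somewhere machine-readable as well as in the policy document. Below is a minimal sketch of such a register in Python; the tool names, approved uses, and restrictions are illustrative placeholders, not recommendations.

```python
# Illustrative approved-tools register. Tool names, uses, and
# restrictions are placeholders; replace them with your own list.
APPROVED_TOOLS = {
    "ChatGPT": {
        "approved_for": ["drafting internal communications"],
        "restrictions": ["no client data", "no financial records"],
    },
    "Grammarly": {
        "approved_for": ["grammar checking of non-sensitive text"],
        "restrictions": ["no legal documents"],
    },
}

def usage_allowed(tool: str, purpose: str) -> bool:
    """Return True only if the tool is registered AND the purpose is listed."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and purpose in entry["approved_for"]
```

Note the default-deny design: an unlisted tool or an unlisted purpose fails the check, which mirrors the "be specific" rule above.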
2. Which tools are prohibited
List tools that are explicitly banned and explain why. This might include tools with poor data handling practices, tools that store data outside the UK, or tools that use inputs to train their models.
3. Data handling rules
This is the most important section. Specify exactly what types of data can and cannot be entered into AI tools:
- Never enter: Client personal data, financial records, legal case details, health information, passwords, API keys
- Allowed with caution: Internal meeting notes (anonymised), general research queries, public information
- Freely allowed: Grammar checking of non-sensitive text, brainstorming, general knowledge questions
4. Breach procedure
What happens if someone breaks the rules? Define a proportionate escalation process — from informal conversation to formal warning to disciplinary action. The goal isn't to punish people but to ensure compliance is taken seriously.
5. New tool requests
Create a process for staff to request approval for new AI tools. This prevents the "just start using it" culture that creates shadow AI (unapproved tools quietly embedded in everyday work) in the first place.
6. Staff acknowledgement
Include a sign-off form so every team member formally acknowledges they've read and understood the policy. This is essential for accountability.
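Tracking who has and hasn't signed is simple to automate. A minimal sketch, assuming acknowledgements are recorded with a date (the names and dates here are placeholders; in practice this data would come from your HR system or the signed forms themselves):

```python
from datetime import date

# Illustrative acknowledgement log; names and dates are placeholders.
acknowledgements = {
    "alice": date(2024, 3, 1),
    "bob": date(2024, 3, 4),
}

def outstanding(roster: list[str]) -> list[str]:
    """List team members who have not yet signed off on the policy."""
    return sorted(name for name in roster if name not in acknowledgements)
```

Running this against the full staff roster gives you a chase list, which matters most during onboarding and after each policy revision.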
Common mistakes
Too long — If your policy is 30 pages, nobody will read it. Aim for 5-8 pages maximum. Use clear headings and bullet points.
Too vague — "Use AI responsibly" means nothing. Staff need specific rules they can follow without interpretation.
Template-based — Generic templates don't account for your sector, your tools, or your data. A law firm's policy should look very different from a marketing agency's.
No training — Without training, staff will sign the acknowledgement without ever absorbing the rules. The policy and a walkthrough session go together; you need both.
Set and forget — AI tools and regulations change constantly. Your policy needs reviewing at least quarterly.
How to roll it out
1. Draft the policy based on your audit findings — you need to know what tools are in use before you can write rules about them
2. Review with leadership to ensure buy-in from the top
3. Train the whole team — walk through the policy, explain the reasoning, answer questions
4. Get sign-off from every team member
5. Set a review date — quarterly at minimum
6. Make it accessible — pin it in Slack, add it to onboarding, include it in the staff handbook
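The review cadence in steps 5 is easy to let slip. One way to keep it honest is a small reminder calculation wired into whatever scheduling or ticketing system you already use; the 90-day interval below reflects the quarterly minimum suggested above and is adjustable.

```python
from datetime import date, timedelta

# Illustrative review reminder. 90 days reflects the quarterly
# minimum suggested in the rollout steps; adjust to your own cycle.
def next_review_due(last_review: date, interval_days: int = 90) -> date:
    """Return the date by which the policy next needs reviewing."""
    return last_review + timedelta(days=interval_days)
```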