The short version
Shadow AI is the use of artificial intelligence tools by employees without the knowledge, approval, or oversight of their employer. It's the AI equivalent of shadow IT — and it's happening in almost every UK business right now.
Your staff are using ChatGPT to draft client emails. They're pasting financial data into Claude to summarise reports. They're running meeting notes through Otter.ai and uploading documents to Notion AI. Most of them signed up with personal email addresses, on free accounts, with no company oversight.
None of this is malicious. Your team are trying to work faster and smarter. The problem is that nobody told them where the line is — because in most businesses, there is no line.
Why it matters
Shadow AI creates three categories of risk that every business owner needs to understand:
1. Data leakage
When staff paste client information into AI tools, that data is processed on third-party servers, often located outside the UK. Many free AI tools state explicitly in their terms of service that user inputs may be used to train future models. That means your client's confidential information could end up in a training dataset, shaping a model used by millions of people.
2. Regulatory exposure
GDPR requires you to know where personal data is being processed and to have a lawful basis for that processing. If staff are entering personal data into AI tools you don't even know about, you can't document the processing, you can't demonstrate compliance, and you're exposed to enforcement action.
The EU AI Act, most of whose obligations apply from August 2026, adds another layer. Businesses using AI in hiring, customer decisions, or data processing need to understand whether they're affected and what obligations apply.
3. Contractual risk
Many client agreements — particularly in legal, financial, and professional services — contain confidentiality clauses and NDAs that prohibit sharing information with third parties. AI tools count as third parties. If a solicitor pastes case details into ChatGPT, that could constitute a breach of their duty of confidentiality.
How widespread is it?
The numbers are stark:
- 65% of AI tools in businesses operate without IT approval
- 63% of organisations have no AI acceptable use policy
- 52% of employees admit to using AI tools their employer hasn't approved
- The average business has 3-5x more AI tools in use than management is aware of
What you can do about it
The good news is that Shadow AI is a solvable problem. It doesn't require banning AI or installing surveillance software. It requires three things:
1. Visibility — Run a Shadow AI audit to discover every tool in use across your organisation. You can't manage what you can't see.
2. Policy — Write a clear, plain-English AI acceptable use policy that tells staff which tools they can use, which are prohibited, and what data must never be entered into any AI tool.
3. Training — Train the whole team so they understand why it matters, what the rules are, and how to use AI productively without putting the business at risk.
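The visibility step above can start with something as simple as scanning an export of your web proxy or DNS logs for traffic to known AI-tool domains. The sketch below illustrates the idea; the domain list and the log format (one "user,domain" pair per line) are illustrative assumptions, so adapt both to whatever your own gateway actually exports.

```python
# Minimal Shadow AI audit sketch: scan a proxy/DNS log export for
# requests to known AI-tool domains and report which users hit them.
# The AI_DOMAINS list and the "user,domain" log format are assumptions
# for illustration -- swap in your own gateway's export format.
from collections import defaultdict

AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "otter.ai": "Otter.ai",
    "notion.so": "Notion AI",
}

def audit(log_lines):
    """Return {tool name: set of users seen accessing it}."""
    found = defaultdict(set)
    for line in log_lines:
        user, _, domain = line.strip().partition(",")
        for known, tool in AI_DOMAINS.items():
            # Match the domain itself or any subdomain of it.
            if domain == known or domain.endswith("." + known):
                found[tool].add(user)
    return dict(found)

sample = [
    "alice,chat.openai.com",
    "bob,claude.ai",
    "alice,intranet.example.co.uk",
    "carol,www.notion.so",
]
print(audit(sample))
```

Even a crude pass like this usually surfaces tools management didn't know were in use, which gives you a concrete starting list for the policy and training steps.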
Most businesses can go from zero visibility to full AI governance in under a month. The key is starting before something goes wrong — not after.
The bottom line
Shadow AI isn't a future risk. It's happening in your business right now. The question isn't whether your staff are using AI tools without approval — it's how many, with what data, and what you're going to do about it.