Why professional services firms are most at risk
Professional services firms — law firms, accountancy practices, consultancies, financial advisers, HR firms — handle some of the most sensitive data in the economy: client legal matters, financial records, employment disputes, tax affairs, medical information.
These firms also tend to have highly educated, tech-savvy staff who adopt productivity tools early. They found AI tools before most sectors did, they use them heavily, and they're entering sensitive client data into them every day.
The combination of highly sensitive data and enthusiastic AI adoption makes professional services the highest-risk sector for Shadow AI.
The unique challenges
Client confidentiality obligations
Professional services firms operate under strict confidentiality duties — often imposed by regulators (SRA, ICAEW, FCA) as well as client contracts. Using AI tools to process client data may breach these obligations, even if the intent is purely to work more efficiently.
Regulatory scrutiny
Professional services regulators are increasingly focused on technology governance. The SRA has issued guidance on AI use in law firms. The ICAEW has published frameworks for AI in accounting. Firms that can demonstrate robust AI governance are better positioned for regulatory inspections.
Client expectations
Sophisticated clients are starting to ask questions about how their data is handled in relation to AI. "Do your staff use AI tools with our data?" is becoming a standard due diligence question. Firms that can answer confidently — with policies, training records, and tool registers to back it up — have a competitive advantage.
Partnership structures
In many professional services firms, each partner operates somewhat independently. This makes it harder to enforce a single approach to AI governance. A policy that works needs buy-in from every partner, not just the managing partner.
A practical starting point
Step 1: Acknowledge the problem
The first step is accepting that your staff are almost certainly using AI tools with client data right now. This isn't a criticism of your team — they're trying to work smarter. But the lack of oversight creates real risk.
Step 2: Run an audit
Survey your staff — anonymously if needed — to discover which AI tools they're using and what data they're entering. The results will almost certainly surprise you: most firms discover three to five times more AI tools in use than they expected.
Step 3: Write a policy
Based on your audit findings, write a clear AI acceptable use policy. This should be specific to your sector and your firm — not a generic template. It needs to cover approved tools, prohibited tools, data handling rules, and breach procedures.
Step 4: Train everyone
Launch the policy with a training session. Walk through every section, use real examples from your sector, and allow time for questions. This is where behaviour actually changes.
Step 5: Maintain and update
AI governance isn't a one-off project. New tools emerge every week. Regulations change every few months. Your policy needs quarterly reviews, your tool register needs maintaining, and your team needs access to an advisor who can answer questions as they come up.
The competitive advantage
Here's the thing most firms miss: AI governance isn't just about risk mitigation. It's a competitive advantage. Firms that can demonstrate to clients that they have robust AI governance — with policies, training records, and maintained tool registers — are more likely to win and retain business.
In a world where every firm is using AI, the ones that use it responsibly will stand out.