Law Firm Limits AI Access Amid Concerns Over Rising Staff Use
Hill Dickinson restricts AI tool access after a surge in staff usage raised concerns about compliance with AI guidelines and data protection.

International law firm Hill Dickinson has restricted general access to several artificial intelligence (AI) tools after discovering a "significant increase in usage" among its staff. The firm observed that much of this usage did not comply with its AI guidelines, prompting the decision to limit access.
In an email to employees, Hill Dickinson's chief technology officer noted more than 32,000 interactions with ChatGPT and 50,000 hits to Grammarly within a single week. There were also more than 3,000 visits to DeepSeek, a Chinese AI platform recently banned from Australian government devices over security concerns. The email warned of a "significant uptick" in the use of AI tools and in files being uploaded to them.
Hill Dickinson, which has offices across England and internationally, aims to "positively embrace" AI while ensuring its safe and appropriate use. The firm's AI policy prohibits uploading client data to external platforms and requires staff to verify the accuracy of AI-generated responses. Access to AI tools will now be granted through a request process.
The Information Commissioner's Office (ICO), the UK's data regulator, advises companies to provide AI tools that comply with organizational policies rather than banning them outright. A spokesperson from the Solicitors Regulation Authority noted a "deficiency in digital skills" across all sectors in the UK, which could pose risks for firms and clients if legal professionals do not fully grasp the new technologies being adopted.
A survey by legal software provider Clio found that 62% of UK solicitors expect AI use to increase over the next year, with law firms applying the technology to tasks such as document drafting, contract review and analysis, and legal research.
The use of AI in law firms presents both opportunities and challenges. A robust AI policy should cover guiding principles, permitted uses, restrictions, monitoring and accountability, and training and awareness. It is also crucial to weigh the ethical and legal implications of AI-driven decisions, including hallucinations, bias, transparency, data privacy, and client confidentiality.