Clio's CEO said LLMs from companies like OpenAI are 'generic' rather than tailored for lawyers
Unveiling a suite of new products at ClioCon, the annual conference hosted by British Columbia-based legal tech company Clio, the company’s founder and chief executive officer Jack Newton argued that unreliable results from generative AI tools stem from one “fundamental issue”: imprecise data.
“We’ve all seen the headlines of AI making mistakes, hallucinating cases that never existed,” Newton told a ClioCon audience last week in Boston, MA. “Lawyers being sanctioned for doing something that, to be fair, we’ve been trained to do over the last 20 years, which is to trust the answer that a computer gives us.”
“All of the foundational large language models – those from companies like OpenAI and Anthropic – are powerful but they’re general purpose, they’re generic, they’re trained on the open web, not on real legal data,” Newton added. “That matters because in legal, the performance of AI depends entirely on the quality behind its answers.”
Newton’s remarks prefaced his announcement that Vincent, the AI legal assistant tool launched by legal tech company vLex in 2018, will be the second product Clio offers under its new enterprise division. That division will cater to large law firms and corporate legal departments.
Clio bought the Barcelona-based vLex this summer in a US$1 billion deal – reportedly the largest acquisition in legal tech history. According to Newton, the advantage that sets Vincent apart from other AI legal assistant tools is that it is trained on vLex’s vast legal database, which grew to encompass more than one billion legal documents from 180 countries in 2023, when vLex acquired legal tech company Fastcase.
“Unlike generic AI that is trained on data available on the open web, Vincent is legal AI that understands the law,” Newton argued. “When AI is grounded in legal data, real cases and real decisions, it doesn’t just produce more accurate answers – it gains new capabilities.”
Newton’s comments come less than two years after a BC Supreme Court Justice issued Canada’s first decision on the use of AI-hallucinated case law in legal proceedings. Since then, Canadian courts and tribunals have released more than 30 other decisions in cases that involved known uses of AI-hallucinated content, according to a case tracker created by a senior research fellow at HEC Paris.
In those cases, AI tools either misrepresented or fabricated case law.
Some lawyers have reportedly shied away from using AI tools in their practice, citing their potential unreliability. But Newton argued Clio’s decision to outfit some of its products with AI capabilities could offer a significant advantage to law firms and the legal profession.
“It’s an opportunity for four times the clients to walk through your doors, four times the matters being opened, four times the revenue flowing into your firm,” Newton said. “Importantly, in realizing this market opportunity, we’ll be addressing the access to justice gap.”
Following applause from the audience, Newton added, “AI is not here to replace legal professionals. AI is here to amplify your impact.”