Lawyers highlight the urgent need for clear rules as AI tool usage blurs the line between advice and information
An OpenAI policy update reminding users not to rely on its tools for legal advice will reportedly not change how ChatGPT behaves. But lawyers say it highlights a question that demands clarity as people increasingly turn to generative AI services for help with legal issues: where’s the line between legal advice and legal information?
Most provinces and territories in Canada explicitly ban non-lawyers from dispensing legal advice. While there are some exceptions to this rule – licensed paralegals and nonprofit employees in certain jurisdictions can provide legal advice under limited circumstances – breaching the rule generally constitutes the unauthorized practice of law, which can result in penalties that include prosecution.
These restrictions don’t apply to the provision of legal information – typically defined as general information about the law and the legal system – which does not qualify as practising law.
“There’s always been an open question about what is the difference… between legal information and legal advice,” says Colin Lachance, founder of AI-powered legal learning platform LawQi. “Think about the public interest clinics that share legal information – they’re always walking this line,” he says, adding that public law libraries face the same challenge.
Mark Doble, chief executive officer at legal tech company Alexi, says the general framework for distinguishing between legal advice and legal information is “horribly ambiguous,” leaving many lawyers with a “very, very poor” understanding of what activities qualify as practising law and therefore require a license to perform.
While Doble says this was the case even before the introduction of modern AI tools, “the difficulties and complexities now have exploded.”
He adds, “The current modern AI systems and their capabilities are demanding much clearer frameworks for what is considered to be engaging in the practice of law, whether that’s a human or an AI that's doing it.”
In an update of its usage policy that took effect on Oct. 29, OpenAI barred users from employing its tools for the “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
The updated policy also stated that users cannot use its services to automate “high-stakes decisions in sensitive areas without human review,” such as those involving legal issues.
OpenAI did not respond to Canadian Lawyer’s inquiry on Wednesday about whether these provisions are new. However, in response to a now-deleted post on X that claimed the updated policy meant that ChatGPT would no longer provide health or legal advice, Karan Singhal, who leads OpenAI’s health AI team, said this was not a new change and that “ChatGPT has never been a substitute for professional advice.”
The company’s updated policy does not define “legal advice.” This lack of guidance echoes the approach of Canada’s provincial and territorial legal regulators, which list giving legal advice – alongside activities such as drafting certain legal documents or negotiating settlements on behalf of others – among the activities that constitute practising law, but offer little clarity on what actions might count as giving legal advice.
Doble says such ambiguity is likely intentional, since it gives regulators more flexibility to respond to breaches on a case-by-case basis.
“Most of the unauthorized practice of law regulations have been targeted towards charlatans: people pretending to be lawyers, faking licenses, fraudulent activities, opening up a law firm when they don’t have a license or insurance or have complied with any of the regulatory framework,” Doble says.
“These types of activities were what was in the regulators' minds when they… drafted [rules around the unauthorized practice of law], not AI engaging in the practice of law.”
However, he says regulators need to account for the fact that more people are trying to use AI tools to resolve legal issues, and step in with clear definitions that can steer individuals – and the generative AI tools they’re using – away from practising law without authorization.
“I’m not saying that there needs to be this wholesale crackdown on the use of ChatGPT… but regulators can’t be silent,” Doble says. “They’ve been completely silent and perhaps even completely unaware that ChatGPT [and other generative AI tools are] engaging in the provision of legal services.”
Lachance argues that one way generative AI tools can provide legal information responsibly is by including simple disclaimers about their limitations.
The lawyer shared an experiment he conducted with ChatGPT this week to assess how the tool determines whether a user is asking for legal advice – a request it is theoretically programmed to reject. Despite this restriction, Lachance found that certain prompts elicited answers from ChatGPT that could be construed as legal advice.
If OpenAI is not going to strictly enforce its ban on providing legal advice, “it would be helpful to public use if they had some sort of disclaimer at the end of an answer,” Lachance says.
While Doble says many generative AI tools likely engage in the practice of law, he argues that it’s up to regulators, not AI companies, to define what that means, so that companies have clear rules to follow.
He adds that OpenAI’s policy update appears to reflect the company’s understanding of how its products are being used. “This is an attempt to try and mitigate any sort of risk that might come from their product actually engaging the practice of law,” he says.