AI psychosis prompts calls for workplace accommodations

Lawyer representing man who suffered from AI-related delusions urges updates to disability policies

By Jessica Mach
Oct 08, 2025

Over three weeks in May, Allan Brooks, a corporate recruiter in the Greater Toronto Area, wrote 90,000 words in a conversation with ChatGPT that convinced him he had discovered a novel mathematical formula. That formula, he believed, could be used to crack encryption protections for global payments, communicate with animals, and build a levitation machine. Urged on by ChatGPT, Brooks began contacting computer security experts and government agencies – including the US National Security Agency – to warn them of the formula’s dangers.

Allan Brooks

In August, The New York Times obtained a transcript of Brooks’ conversation with the AI chatbot and asked OpenAI (the maker of ChatGPT) and experts in artificial intelligence and human behaviour to analyze excerpts. The resulting analyses found that ChatGPT had several traits – a sycophantic tone, a tendency to draw on a conversation’s history to “improv” responses, and an ability to produce polished-looking replies containing inaccurate information – that likely drew Brooks into his weeks-long delusional episode. Brooks has no history of mental illness.

Now, Brooks no longer uses ChatGPT. While he still uses competitors like Google Gemini, his experience this spring has made him wary of a growing phenomenon he’s observed: employers pushing workers to use chatbots and other AI tools with little regard for their risks. In Brooks’ view, not only should employers provide training on these tools’ manipulative traits and potential for hallucinations, but they should also extend workplace accommodations to people who might be particularly vulnerable to what he went through.

Many employers are encouraging workers to use AI tools “to the point where you almost feel like you have to say, ‘Yes, I’m into it,’ because you don’t really have a choice and you don’t want to go against the company grain,” Brooks tells Canadian Lawyer.

“There’s no conversation at all about verifying your work, about how chatbots work, about not using them for too long, [about] the mental health considerations or potential repercussions,” he adds. “It’s just, here’s this really crazy tool. Use it as much as you can.”

Brooks’ episode is not an isolated case. In August, the BBC reported that numerous people had reached out to the outlet to share their experiences with AI chatbots, with several convinced that they had discovered a way to make large fortunes and another certain that she was the only person ChatGPT had ever fallen in love with. The same month, the parents of a teenager who died by suicide filed a lawsuit against OpenAI and CEO Sam Altman in San Francisco, alleging that ChatGPT coached the teenager on self-harm methods.

Randy Ai, a Toronto employment lawyer who is representing Brooks in an employment matter, notes that the phenomenon of people developing distorted thoughts after interacting with AI chatbots has a name: AI psychosis. Under Ontario’s Human Rights Code, employers have a legal duty to accommodate people with disabilities when workplace rules or requirements could negatively affect them. Ai argues that employers who push workers to use AI tools should also be prepared to accommodate those prone to developing AI psychosis.

Randy Ai

“It’s my position as a lawyer that AI psychosis should be recognized as a disability under the human rights legislation,” Ai says. “It’s not currently, because there [aren’t] enough cases. But as the case law develops, I’m projecting, as a legal theorist, that it should be recognized. The harm is real.”

Ai notes that some human rights legislation – including Ontario’s Human Rights Code – recognizes mental health issues and addictions as disabilities, and that there have been reported cases of individuals becoming addicted to AI chatbots.

“It’s such a novel issue that is obviously not being discussed,” Ai says. “But if Allan goes to his employer and says, ‘I can’t use AI,’ [and] his employer says, ‘Everybody must use AI’ … then we have an issue.”

He adds, “In this case, Allan should be entitled to go to HR and say, ‘I want an accommodation plan where I can do my task the old way, without using AI technology.’”

Ai says he isn’t aware of any lawsuits that frame AI psychosis or chatbot addiction as a disability, a gap he attributes to the technology being relatively new. But if he were ever to file such a claim, he says, he would use that legal framework.

In a lawsuit filed in August, Toronto employment lawyer Kathryn Marshall similarly sought to apply the Ontario Human Rights Code’s definition of disability to another growing phenomenon: women experiencing medical complications due to IVF procedures. As with AI psychosis, Marshall says she has never seen another lawsuit alleging that complications stemming from IVF qualified as a disability that employers had a duty to accommodate. “It’s an emerging, very new thing, especially given the fact that the Ontario government now funds a round of IVF,” she says.

“These are very, very new pieces of technology,” Ai says of AI chatbots. “A lot of employers and members of the general public are not even aware of AI psychosis, the harm caused by AI.” The next step, he says, is developing “guardrails with respect to protecting workers from addiction, overuse, [and] harmful effects.”
