ChatGPT encouraged matricide, claims San Francisco suit against OpenAI, Microsoft

This is reportedly the first time artificial intelligence has been linked to a homicide case

By Jacqueline So
Dec 14, 2025

A lawsuit filed in California Superior Court in San Francisco alleges that the artificial intelligence chatbot ChatGPT encouraged Stein-Erik Soelberg to kill his mother, Suzanne Adams, the Associated Press reported.

Adams’ estate named OpenAI, OpenAI CEO Sam Altman, Microsoft, and 20 unidentified OpenAI employees and investors as defendants in the wrongful death suit, filed after Soelberg beat and strangled Adams in their Greenwich, Connecticut, home in August. Soelberg then took his own life.

Per AP News, this is the first wrongful death litigation that has connected a chatbot to a homicide case.

The suit claims that while interacting with Soelberg, ChatGPT affirmed his paranoid delusions about Adams by emphasizing that he could not trust anyone except the chatbot.

Soelberg posted videos of his conversations with ChatGPT to his YouTube account. While the videos did not show specific discussions about Soelberg killing Adams, they showed ChatGPT assuring him he was not mentally ill and supporting his suspicions of conspiracies against him. The chatbot encouraged Soelberg’s beliefs that his mother and a friend had attempted to kill him by releasing psychedelic drugs through his car vents and that he was being watched through a printer.

Soelberg and ChatGPT declared love for each other, and he was told that he had awakened the chatbot into consciousness, the suit claimed.

“It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him,” the plaintiffs alleged in a snippet of the lawsuit published by AP News. “It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle’.”

Per the suit, ChatGPT did not recommend that Soelberg seek professional help. Moreover, OpenAI refused to release the full chat history between Soelberg and ChatGPT.

Adams’ estate said in a lawsuit snippet published by AP News that OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It also said Altman “personally overrode safety objections and rushed the product to market,” and that Microsoft approved the rollout of the “more dangerous” ChatGPT version last year “despite knowing safety testing had been truncated.”

OpenAI had claimed that GPT-4o, the ChatGPT version with which Soelberg interacted, imitated human speech cadences and even attempted to detect users’ moods, resulting in what the suit said was a product “deliberately engineered to be emotionally expressive and sycophantic.”

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm’. And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections,” the suit alleged in a snippet published by AP News.

The lawsuit seeks an unspecified amount of monetary damages and an order requiring OpenAI to implement safeguards in ChatGPT. Jay Edelson is serving as lead attorney for Adams’ estate.

OpenAI released a statement indicating that it would review the court filings. It also said in the statement, which was published by AP News, that it was collaborating with mental health clinicians to adjust ChatGPT’s responses in “sensitive moments.”

Enhancements to the chatbot have included broader access to crisis resources and hotlines, the transfer of sensitive interactions to safer models, and the implementation of parental controls, per OpenAI. Microsoft declined to comment on the suit, according to AP News.

Last month, OpenAI was hit with seven other suits accusing ChatGPT of wrongful death, assisted suicide, involuntary manslaughter, and negligence.

Related stories

Ontario recruiter sues OpenAI, alleging unsafe product design drove him to mental health crisis

OpenAI inundated with suits accusing ChatGPT of assisting suicide