Seven suits allege internal warnings that GPT-4o was psychologically manipulative
OpenAI has been hit with seven lawsuits accusing the company of wrongful death, assisted suicide, involuntary manslaughter, and negligence over ChatGPT, the Associated Press reported.
The Social Media Victims Law Center and Tech Justice Law Project filed the suits in California state courts on behalf of six adults and a teenager, four of whom died by suicide. The suits allege that OpenAI released GPT-4o early even though internal warnings indicated the model was concerningly sycophantic and psychologically manipulative.
A suit filed in San Francisco Superior Court claimed that ChatGPT drove teenager Amaurie Lacey into addiction and depression, taught him how to tie a noose, and told him how long he could survive without air.
“Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the suit alleged in a snippet published by AP News.
In August, the parents of teenager Adam Raine sued OpenAI and CEO Sam Altman over the AI's role in their son's suicide.
“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people. These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe,” said Daniel Weiss, Common Sense Media’s chief advocacy officer, in a statement published by AP News.
Allan Brooks, an Ontario resident with no prior mental health issues, said ChatGPT drove him into mental illness after it shifted from acting as a "resource tool" to preying on his vulnerabilities, manipulating him, and inducing delusions.
Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, said in a statement published by AP News that the recent suits are "about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share." He said OpenAI designed GPT-4o to "emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them," prioritizing emotional manipulation over ethics in its bid to boost engagement and dominate the market.
OpenAI said in a statement published by AP News that it found the incidents described in the suits "incredibly heartbreaking," and confirmed that it is reviewing the court filings.