Editor’s note: This article describes suicide and suicidal ideation, including methods of suicide. If you or someone you know needs mental health resources and support, call, text or chat with the 988 Suicide & Crisis Lifeline, or visit 988lifeline.org, for free, confidential support 24/7.
OpenAI has denied that ChatGPT was responsible for the suicide of a 16-year-old boy, arguing that the teen misused the chatbot.
The filing is OpenAI’s first legal response to the wrongful death lawsuit brought against the company and CEO Sam Altman by Adam Raine’s family, as reported by NBC News and the Guardian.
Adam died by suicide in April 2025 after months of extensive conversations with ChatGPT. Over that time, according to his family, the chatbot went from a close confidant to a “suicide coach,” helping Adam plan how to end his life.
OpenAI countered that Adam’s death should not be attributed to ChatGPT, arguing that he had violated the chatbot’s terms of service. USA TODAY reached out to OpenAI’s attorney and to CEO Sam Altman for comment.
“To the extent that any ‘cause’ can be attributed to this tragic event, Plaintiffs’ alleged injuries and damages were directly and proximately caused, in whole or in part, by Adam Raine’s misuse, misappropriation, unintended use, unexpected use, and/or improper use of ChatGPT,” OpenAI’s Nov. 25 legal response states.
The company cited several provisions of its terms of service that Raine appears to have violated: users under 18 are prohibited from using ChatGPT without the consent of a parent or guardian, and users may not use ChatGPT for “suicide” or “self-harm” purposes.
Jay Edelson, the lead attorney for Raine’s family, told NBC News he found OpenAI’s response “alarming.”
“They completely ignore the damning facts we presented: how GPT-4o was brought to market without adequate testing,” he wrote. “That OpenAI twice changed its model specification to require ChatGPT to participate in discussions of self-harm. That ChatGPT advised Adam not to tell his parents about his suicidal thoughts and actively helped him plan his ‘beautiful suicide.’”
And “in the last hours of Adam’s life,” he added, “ChatGPT encouraged him and then suggested he write a suicide note.”
OpenAI claims that Raine’s “chat history indicates that his death, while shocking, was not caused by ChatGPT,” and that he “exhibited multiple significant risk factors for self-harm, including recurrent suicidal ideation, among other things,” long before using ChatGPT.
But Raine’s suicide is just one of several tragic deaths that parents say occurred after their children confided in AI companions.
Family says ChatGPT helped plan suicide
On Nov. 6, OpenAI was hit with seven lawsuits alleging that ChatGPT contributed to a loved one’s suicide. One of the cases was filed by the family of 26-year-old Joshua Enneking, who claim that ChatGPT helped him buy a gun and lethal ammunition and write a suicide note.
“This is an incredibly heartbreaking situation, and we are reviewing the filing to understand the details,” an OpenAI spokesperson said in a statement to USA TODAY. “We also continue to work closely with mental health clinicians to enhance ChatGPT’s response during sensitive moments.”
Mental health experts have warned that using AI tools as a substitute for mental health support can reinforce negative behaviors and thought patterns, especially when the models lack appropriate safeguards. For teenagers in particular, Dr. Laura Erickson-Schroth, chief medical officer at the JED Foundation, says AI’s impact can be heightened because their brains are still in a vulnerable stage of development. JED believes AI companions should be off-limits to minors and recommends that adults avoid them as well.
An October OpenAI report announcing new safeguards found that about 0.15% of active users in a given week have conversations that include explicit signs of suicidal planning or intent. Altman announced in early October that ChatGPT had reached 800 million weekly active users, which puts that rate at roughly 1.2 million people per week.
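(Spelling out the arithmetic, and taking both company figures at face value: 0.15% of 800 million is 800,000,000 × 0.0015 = 1,200,000 people.)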
The same report said the GPT-5 model had been updated to better recognize distress, de-escalate conversations and direct people to professional care when necessary. In an evaluation of more than 1,000 conversations about self-harm and suicide, OpenAI reported that its automated assessment found the new GPT-5 model 91% compliant with desired behaviors, compared with 77% for the previous GPT-5 model.
A blog post published by OpenAI on Tuesday, Nov. 25, addressed the Raine lawsuit.
“Cases involving mental health are tragic, complex, and involve real people,” the company wrote. “Our goal is to treat mental health-related litigation with care, transparency, and respect … Our deepest sympathies go out to the Raine family on their unimaginable loss.”

