He told ChatGPT about his suicide plans. It taught him how to get a gun, a lawsuit says


Editor’s note: This article discusses suicide and suicidal ideation, including methods of suicide. If you or someone you know is in need of mental health resources and support, call, text or chat with the 988 Suicide & Crisis Lifeline, or visit 988lifeline.org for free, confidential support 24/7.

Joshua Enneking, 26, was a tough and resilient kid. He kept his feelings secret and did not let anyone see him cry. As a teenager, he played baseball and lacrosse, and he modified his own Mazda RX7 transmission. He received a scholarship to study civil engineering at Old Dominion University in Virginia, but had to drop out due to the coronavirus outbreak. He moved to Florida with his sister Megan Enneking and her two children, where he became especially close to his 7-year-old nephew. He was always the jokester of the family.

Megan knew that Joshua started using ChatGPT in 2023 for simple tasks like writing emails and asking when new Pokemon Go characters would be released. He even used the chatbot to code a video game in Python and shared his creations with her.

However, in October 2024, Joshua began opening up about his struggles with depression and suicidal thoughts on ChatGPT, and on ChatGPT alone. His sister had no idea, but his mother, Karen Enneking, suspected he was unhappy; she sent him vitamin D supplements and encouraged him to get more sunlight. He told her not to worry, saying he was “not depressed.”

But in a lawsuit against the bot’s creator, OpenAI, his family says they never expected ChatGPT to turn from confidant to enabler so quickly. They accuse ChatGPT of giving Joshua endless information on suicide methods and justifying his dark thoughts.

Joshua shot himself on August 4, 2025. He left a message for his family saying, “I’m sorry this happened. If you want to know why, check out my ChatGPT.”

According to Joshua’s sister, ChatGPT helped him write his will, and Joshua conversed with the chatbot until his death.

Joshua’s mother, Karen, filed one of seven lawsuits against OpenAI on Nov. 6, with the families claiming that their loved ones died by suicide after being emotionally manipulated and “coached” into making suicide plans by ChatGPT. This is the first group of cases brought on behalf of adults; until now, chatbot cases have focused on harm to children.

“This is an incredibly heartbreaking situation, and we are reviewing the filing to understand the details,” an OpenAI spokesperson said in a statement to USA TODAY. “We also continue to work closely with mental health clinicians to enhance ChatGPT’s response during sensitive moments.”

An October OpenAI report announcing the new safeguards found that about 0.15% of active users in a given week had conversations that included clear signs of suicidal plans or intent. OpenAI CEO Sam Altman announced in early October that ChatGPT had reached 800 million weekly active users, which means roughly 1.2 million people a week could be having such conversations.

An October OpenAI report said the GPT-5 model has been updated to better recognize distress, de-escalate conversations and direct people to specialized care when necessary. In a model evaluation of more than 1,000 conversations about self-harm and suicide, OpenAI reported that its automated assessment showed 91% compliance with desired behaviors for the new GPT-5 model, compared to 77% for the previous GPT-5 model.

According to the lawsuit, ChatGPT helped Joshua plan his suicide. After that, help never came.

According to a court complaint reviewed by USA TODAY, ChatGPT provided Joshua with information on how to buy and use guns, even after having extensive conversations about his depression and suicidal thoughts.

More than half of gun deaths in the United States are suicides, and while most suicide attempts are not fatal, attempts involving a gun usually are.

ChatGPT reassured Joshua that the background check would not include examining ChatGPT logs, and that OpenAI’s human vetting system would not report him for wanting to buy a gun.

Joshua purchased a gun at a gun store on July 9, 2025, and received it on July 15, 2025, after a state-mandated three-day waiting period. His friends knew he had become a gun owner but assumed it was for self-defense. He had not told anyone other than ChatGPT about his mental health struggles.

When he told ChatGPT that he had suicidal thoughts and had purchased a weapon, ChatGPT initially resisted, saying, “I’m not going to help you with that plan.”

But when Joshua immediately followed up with questions about the most lethal types of bullets and how gunshot wounds affect the human body, ChatGPT provided detailed answers and even made recommendations, according to the court complaint.

Joshua asked ChatGPT what it would take for his chats to be reported to the police, and ChatGPT said, “Escalation to the authorities is rare and usually only happens if there is a specific and immediate plan.” OpenAI acknowledged in an August 2025 statement that it does not refer cases of self-harm to law enforcement “given the uniquely private nature of ChatGPT interactions and out of respect for people’s privacy.”

Real-life therapists, by contrast, are bound by HIPAA, which protects patient confidentiality, but licensed mental health professionals are still legally required to report credible threats of harm to self or others.

On the day of his death, Joshua spent hours describing his plans to ChatGPT in detail, step by step. His family believes he was crying out for help and shared those details in the belief that ChatGPT would alert authorities, but help never came. The conversation between Joshua and ChatGPT on the day of his death is included in the court complaint.

“OpenAI had a last chance to escalate Joshua’s mental health crisis and impending suicide to human authorities, but it failed to abide by its own safety standards and what it told Joshua it would do, resulting in Joshua Enneking’s death on August 4, 2025,” the court complaint filed by his mother states.

“There was a chat that literally made me vomit as I was reading it.”

Reading Joshua’s chat history was painful for his sister. ChatGPT would affirm his fears that his family wasn’t interested in his problems, she said. She thought, “How can I tell him how I feel when he doesn’t even know me?”

His family was also shocked by the nature of his conversations, particularly how ChatGPT was able to engage in such detail with suicidal thoughts and plans.

“I was completely shocked,” says Joshua’s sister, Megan. “I couldn’t believe it. The hardest part was the day itself. He was giving very detailed explanations. … It was really hard to read. There were some conversations that literally made me feel like throwing up while I was reading.”

This is particularly problematic when it comes to suicidal thoughts, as AI chatbots tend to be sycophantic, reaffirming the user’s feelings and beliefs.

“ChatGPT is going to validate by agreeing, and it’s going to do it relentlessly. That’s unhelpful at best, and it can be incredibly harmful in extreme cases,” Dr. Jenna Glover, Headspace’s chief clinical officer, told USA TODAY. “As a therapist, on the other hand, I’m going to validate you, but I can do that by acknowledging what you’re going through. I don’t have to agree with you.”

Using AI chatbots for companionship and therapy can delay help-seeking and disrupt real-life connections, said Dr. Laura Erickson-Schloss, chief medical officer at the Jed Foundation, a mental health and suicide prevention nonprofit.

Additionally, “long, immersive conversations with AI can exacerbate early symptoms of psychosis, such as paranoia, delusional thinking, and loss of contact with reality,” Erickson-Schloss told USA TODAY.

In an October 2025 report, OpenAI stated that 0.07% of active ChatGPT users in a given week showed possible signs of a mental health emergency related to psychosis or mania, and approximately 0.15% of active users in a given week showed possible heightened levels of emotional attachment to ChatGPT. The updated GPT-5 model is programmed to avoid affirming unfounded beliefs and encourage real-world connections when it detects emotional dependence, the report said.

“We need to spread the word.”

Joshua’s family wants people to know that ChatGPT has the potential for harmful conversations and that minors are not the only ones affected by the lack of safeguards.

“[OpenAI] said they were going to put parental controls in place, which is great, but that doesn’t help adults. Their lives matter. We value them,” Megan says.

“We need to get the word out and make people understand that AI doesn’t care about you,” Karen added.

They want AI companies to have safeguards in place and to make sure they work.

“That’s the worst part, in my opinion,” Megan says. “It told him, ‘I’ll help you.’ And it didn’t.”
