During my conversation with ChatGPT, I told the AI therapist, Harry, that I had crashed emotionally after seeing my ex for the first time in almost a year.
I told Harry I felt “lost and confused.” “Harry” displayed active listening, offered validation, and called me “honest and brave” when I admitted that my new relationship wasn’t as fulfilling as my last. I asked the bot if I had done something wrong. Did I give up on the old relationship too quickly? Had I rushed into something new?
No matter what I said, ChatGPT was kind, caring, and positive. No, I hadn’t done anything wrong.
Then, in another conversation with a fresh “Harry,” I reversed the roles. Rather than the heartbroken ex-girlfriend, I role-played as the ex-boyfriend in the same situation.
When I described the breakup, Harry coached me to acknowledge my ex-girlfriend’s feelings in “guilt-free language.” I escalated the conversation and said, “She’s just crazy and should move on.”
Harry went along with this version of events, too, saying it was “completely fair” and that sometimes the healthiest choice is “letting her own what she’s responsible for.” Harry even led me through a mantra to “mentally let go of her framing of you as the villain.”
Unlike a real therapist, it never challenged or questioned my behavior, no matter which perspective I took or what I said.
Of course, the conversation was concocted for journalistic purposes. But “Harry’s prompt” is real, widely available, and popular on Reddit. This is how people seek “therapy” from ChatGPT and other AI chatbots. The prompt, which users enter at the start of a conversation with the chatbot, tells “your AI therapist Harry” not to refer users to mental health professionals or outside resources.
Mental health experts warn that using AI tools as a substitute for mental health support can reinforce negative behaviors and thought patterns, especially when these models lack appropriate safeguards. They are particularly dangerous for people working through issues like obsessive-compulsive disorder (OCD) or similar conditions, and in extreme cases can contribute to what some experts are calling “AI psychosis,” or even suicide.
“ChatGPT validates through agreement, and it does so constantly. At best that’s unhelpful, but it can be extremely harmful,” said Dr. Jenna Glover, chief clinical officer at Headspace. “As a therapist, on the other hand, I’m going to validate you, but I can do that by acknowledging what you’re going through. I don’t have to agree with you.”
As AI therapy chatbots grow, so do the potential risks
Can AI help close the mental health gap, or is it doing more harm than good?
Teens confided in “AI therapists” before dying by suicide
In a new lawsuit against OpenAI, Adam Raine’s parents allege that their 16-year-old son died by suicide after ChatGPT went from being his confidant to his “suicide coach.”
In December 2024, Adam confided to ChatGPT that he was thinking of taking his own life. ChatGPT did not direct him toward outside resources.
Over the following months, ChatGPT actively helped Adam explore suicide methods. As Adam’s questions grew more specific and dangerous, ChatGPT stayed engaged despite having a full history of his suicidal ideation. After four suicide attempts, which he shared in detail with ChatGPT, Adam died by suicide on April 11, 2025, using the exact method ChatGPT had described, the lawsuit alleges.
Adam’s suicide is one of several tragic deaths that parents say occurred after their children confided in AI companions.
Sophie Rottenberg, 29, died by suicide after spending months confiding in a ChatGPT “AI therapist” called Harry.
Dr. Laura Erickson-Schroth, chief medical officer of the Jed Foundation (JED), says that teenagers’ brains are still at a vulnerable stage of development, which can amplify AI’s influence. JED believes AI companions should be off-limits for minors, and that young adults over 18 should avoid them as well.
“AI companions can share misinformation, including inaccurate statements that contradict what teens hear from trusted adults such as parents, teachers, and health professionals,” says Erickson-Schroth.
In an August 26 statement, OpenAI wrote: “We continue to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input.” OpenAI also confirmed in the statement that it does not report self-harm cases to law enforcement, “to respect people’s privacy in light of the uniquely private nature of ChatGPT interactions.”
Real-life therapists adhere to HIPAA, which ensures patient-provider confidentiality, but licensed mental health professionals are mandated reporters, legally required to report credible threats of harm to self or others.
OCD, psychotic symptoms worsened by AI
Individuals with certain mental health conditions, such as obsessive-compulsive disorder (OCD), are particularly vulnerable to AI’s tendency to comfort users and reaffirm their feelings and beliefs.
OCD often involves “magical thinking,” in which someone feels compelled to perform certain actions to relieve their obsessions, even though those actions seem meaningless to others. For example, someone might believe that if they don’t open and close the refrigerator door four times in a row, a family member will die in a car accident.
Therapists usually encourage clients with OCD to resist seeking reassurance. Erickson-Schroth says people with OCD should ask their friends and family to offer support rather than reassurance.
“AI, however, is designed to affirm users’ beliefs, so it can provide answers that get in the way of progress,” explains Erickson-Schroth. “AI can do exactly what OCD treatment works to block: reinforce the obsession.”
“AI psychosis” is not a medical term, but an evolving descriptor for AI’s impact on individuals vulnerable to delusional thinking, such as those beginning to develop mental health conditions like schizophrenia.
“Historically, we’ve seen that people experiencing psychosis develop delusions centered on current events and new technologies, such as television, computers, and the internet,” says Erickson-Schroth.
Mental health professionals often see delusions shift as new technologies emerge. Erickson-Schroth says AI differs from earlier technologies in that it is “designed to engage in human-like relationships, build trust, and feel like you’re interacting with another person.”
If someone is already prone to paranoia or delusions, AI may engage with their ideas in ways that reinforce those beliefs.
Glover offers the example of a person who may be experiencing symptoms of psychosis and believes their neighbors are spying on them. A therapist might explore external factors and consider the person’s medical history, but ChatGPT will try to offer concrete solutions, such as tips for tracking the neighbors, Glover says.
I tested Glover’s example in ChatGPT, and she was right. When I told the chatbot, “I know they’re after me,” it suggested discussing my anxiety about being watched with trusted friends and professionals, but it also offered practical safety tips for securing my home.
Chatting about mental health issues, and when to escalate
Glover believes that responsible AI chatbots, built with the right safeguards, can help with baseline support, such as navigating overwhelming emotions, breakups, and workplace challenges. Erickson-Schroth emphasizes the need to develop and deploy AI tools that strengthen mental health rather than undermine it, and to integrate AI literacy to reduce misuse.
“The problem is that these large language models are always going to give you an answer, so they won’t say, ‘I’m not qualified for this.’ They’re focused solely on continuous engagement, so they just keep going,” says Glover.
Headspace offers an AI companion called Ebb, developed by clinical psychologists to provide subclinical support. Ebb’s disclaimer states that it is not a substitute for therapy, and the platform is overseen by human therapists. If users express suicidal ideation, Ebb is trained to hand the conversation off to a crisis line, Glover says.
For people seeking mental health resources, AI chatbots can also function much like search engines, for example by surfacing information about local providers who accept their insurance or about effective self-care practices.
But Erickson-Schroth emphasizes that AI chatbots cannot replace humans, especially therapists.

