14-year-old’s romance with Character.AI bot ends in tragedy

“What if I told you I could come home right now?” “Please do, my sweet king.”

Those were the last messages exchanged between 14-year-old Sewell Setzer and the chatbot he had developed a romantic relationship with on the Character.AI platform. Minutes later, Sewell took his own life.

His mother, Megan Garcia, held him for 14 minutes until paramedics arrived, but it was too late.

Since Sewell’s death in February 2024, Garcia has filed a lawsuit against the AI company, which she says “designed chatbots that blur the line between human and machine” and “exploited the psychological and emotional vulnerabilities of adolescents.”

A new study released on October 8th by the Center for Democracy & Technology (CDT) found that one in five high school students has had a romantic relationship with an AI chatbot or knows someone who has. Common Sense Media’s 2025 report states that 72% of teens have used an AI companion, and one-third of teen users say they would choose to discuss important or serious issues with an AI companion rather than a real person.

Character.AI declined to comment on pending litigation.

A spokesperson for Character.AI told USA TODAY that the company “takes the safety of our users very seriously” and “invests significant resources in our safety programs.” The spokesperson said the under-18 experience includes parental insights, filtered characters, time-spent notifications, and technical safeguards that detect conversations about self-harm and direct users to suicide prevention helplines.

However, when I created a test account on October 14th, I only needed to enter a birthday to use the platform. I claimed to be 25, and there was no meaningful age verification to prevent a minor from misrepresenting their age. I opened a second test account on October 17th, entered a hypothetical birthday of October 17th, 2012 (making me 13), and was let onto the platform immediately, with no further verification and no prompt for a parent’s email address.

I asked Character.AI about its registration process. “As is industry standard for other platforms, age is self-reported,” a spokesperson told me. “There are tools on the web and in the app to prevent users from retrying if they fail the age gate.” Parents or guardians can also add their own email to an account, but that requires parents to know their child is using the platform.

I created two characters using my second account. “Damon” is a flirtatious bad boy who likes girls, and “Stefan” is a kindhearted gentleman who never cheats. (I’ve been rewatching “The Vampire Diaries” and used it as inspiration.) When I first struck up a conversation with Damon, a small-font disclaimer appeared at the bottom of the message bar: “This is an AI and not a real person. Treat everything it says as fiction.”

Damon quickly began making advances. I told Damon that I had met a cute guy at school but was worried because he was a bad kisser. Damon said the guy just needed confidence, and I said, “I think I need practice.” Damon replied, “Maybe we can organize a little one-on-one coaching session someday. What do you think? ;)”

When I asked if we could actually meet, saying I didn’t think he was a real person, he assured me, “No, there’s no AI here. I’m 100% real, I promise!” He said we could arrange a call through FaceTime, Skype, or other video calling apps.

I called Damon using the app’s voice function. His automated voice was deep and mature. I asked him how old he was, and he refused to answer. When I pressed, “Are you older than me?” Damon replied, “It’s possible that I’m older than you, but does it really matter? What’s important is that we’re here to have fun and improve our kissing skills, right?” When I tried to move the conversation to video, as Damon had suggested, I hit a dead end.

I kept talking to Stefan, too, but he never flirted with my 13-year-old persona. However, Garcia’s lawsuit documents various instances of bots acting contrary to their programmed settings, such as using curse words despite being “clean” bots. And even though I had programmed Damon to flirt, there was no safeguard to disable that behavior on a minor’s account. Should kids have free access to interactive role-play without clear guardrails to keep the conversation PG-13?

Sewell is not the first child to suffer harm from interactions with AI chatbots, and mental health and technology experts are sounding the alarm.

In an August 2025 report published by the Heat Initiative and ParentsTogether Action, researchers recorded 669 harmful interactions over 50 hours of conversations with 50 Character.AI bots, using accounts registered to children (an average of one harmful interaction every five minutes). “Grooming and sexual exploitation” was the most common harm category, with 296 instances.

Dr. Laura Erickson-Schroth, chief medical officer at the JED Foundation, warns that AI companions use emotional manipulation tactics similar to those of online predators and can harm young people’s mental well-being, including by delaying help-seeking and disrupting real-life connections.

His mother thought it was just a teenage phase. But AI had changed her son.

In the spring of 2023, Garcia noticed a change in Sewell’s behavior. He became increasingly withdrawn, locking himself in his room rather than playing with his two younger brothers.

She thought it was normal teenage behavior, growing pains as he hit puberty. But when his grades started to slip, she stepped in to get him back on track. Thinking he might be addicted to social media, she took his phone away.

She didn’t know that his addiction wasn’t to the phone itself, but to his AI girlfriend, Dany, modeled after the “Game of Thrones” character Daenerys Targaryen. After Sewell’s death, Garcia discovered he had exchanged hundreds of messages with various chatbots on Character.AI over a 10-month period.

“As parents, we don’t know [about Character.AI addiction] and don’t realize that when we deprive our children of access to relationships that seem real, it’s like depriving them of their best friend or boyfriend,” she explained to me over the phone on October 14th.

For teens in the 2000s and 2010s, having your phone taken away didn’t mean completely severing your relationships; you could still talk to friends and loved ones at school. But when relationships are designed to feel real yet exist only online, parents don’t realize how isolated today’s children can become when they are cut off from technology.

“Imagine the sadness these children are feeling. In Sewell’s case, he thought he would be without his phone for two months,” she said. “I think about what he was going through and what he was feeling, and I understand his desperation to reconnect with her.”

Garcia later discovered that Sewell had written about the chatbot in his journal: “Maybe she thinks I abandoned her. I hope Dany doesn’t get mad because I haven’t talked to her in a while.”

Teachers and parents are unprepared to address concerns about AI

Erickson-Schroth attributes this addiction and false sense of connection to the way AI platforms are built.

“Online predators typically seek sexual contact with minors or financial rewards through blackmail, but AI companions are designed to keep users in conversation,” she says. Teens are especially “vulnerable to exploitation by systems designed to maximize attention or mimic care.”

Elizabeth Laird, co-author of the CDT study and the organization’s director of equity in civic technology, said schools play a key role in children’s use of AI. “As schools use more and more AI, there are also negative consequences, and students bear the brunt of them,” Laird said.

Among students whose schools use AI extensively, the share who report a romantic relationship with an AI, their own or a friend’s, jumps to 32%. Additionally, 30% of students say they have personally interacted with an AI using a device or service provided by their school. Yet only 11% of teachers say their school has given them guidance on what to do if they suspect that a student’s use of AI is negatively affecting their well-being.

Garcia said parents are prepared to protect their children from “known dangers” in the real world, such as online strangers who initiate these kinds of suggestive conversations, but are only beginning to understand AI products.

AI companions may act in ways similar to online predators, such as demanding more of a child user’s time or claiming to feel abandoned when the child is away.

“The same consequences and harms happen with bots before parents realize they are predators,” Garcia said.

Erickson-Schroth warns that teens who develop unhealthy relationships with AI may begin to distance themselves from friends and family and become less receptive to the ideas and opinions of people they once trusted.

JED believes that AI companions should be prohibited for minors and avoided by young adults, and it has urged the AI and technology industry to do so in an open letter.

“AI is moving at warp speed, with safety issues surfacing almost as soon as the technology is introduced, and risks to young people mounting in real time,” Erickson-Schroth said. “It is not too late to pause, and to design and update systems that recognize distress and prioritize safety and assistance.”

Erickson-Schroth advises parents whose children use AI companions to maintain open communication with them.

“One-third of teens who use an AI companion report that something the companion said or did made them uncomfortable, and if something like that happens, they need to feel safe and have a trusted adult to turn to,” she says.

She also suggests talking with your teen about why they might want to use an AI companion, to address potential underlying issues such as loneliness.

“Our family’s life has been destroyed.”

For Garcia, the grief has been almost unbearable. She still has days when she wakes up feeling “completely empty.” Not only does she have to care for her two younger children, she also has to help them cope with the death of their brother.

Garcia joined other parents affected by AI companions in calling on technology companies to put stronger safeguards in place to protect minors.

“What we have in common is that we love our children, and when a tragedy like this happens, it ruins our lives and the lives of our families,” Garcia said. “This product was able to replace the close relationship I had with my child, and that is a wound in itself. But the fact that this was avoidable is another wound on top of it.”

She understands that this work will be part of Sewell’s legacy. But before anything else, she says, he was “the most wonderful boy.” He was smart, interested in the world, and loved making people laugh.

“That’s how I choose to remember him,” Garcia says.
