New research shows teens are turning to AI companions to solve their problems. That is a problem.


Editor’s Note: Kara Alaimo is an associate professor of communications at Fairleigh Dickinson University. Her book “Influence: Why Social Media is Toxic to Women and Girls – And How We Get It Back” was published in 2024 by Alcove Press. Follow her on Instagram, Facebook and Bluesky.

When two of James Johnson Billund’s friends got into an argument earlier this year, he didn’t know what to do. So the 16-year-old asked his AI companion for advice.

An AI companion is a digital character that talks with users, according to Common Sense Media, a San Francisco-based nonprofit that helps parents and teachers instill critical thinking skills in children.

The chatbot told Johnson Billund, who lives in Philadelphia, to separate the two friends. He did, and it solved the immediate problem, he said. But now “they don’t talk much.”

The experience, he said, showed him that AI companions “can’t fix any deeper problems.” “I’m scared to ask them deep, fundamental questions.”

Another thing that struck Johnson Billund was that his AI companions always seemed to agree with him and tell him what he wanted to hear. He also found their resemblance to humans unsettling. At one point, he said, he forgot he wasn’t talking to an actual friend but to an AI companion.

New research suggests other teenagers are having similar experiences.

A survey of more than 1,000 teens ages 13 to 17, conducted by Common Sense Media this year, shows that a majority of teenagers use AI companions. Over half of teens use them regularly, with a third turning to them for relationships and social interaction.

Furthermore, 31% of teens find conversations with their AI companions as satisfying as, or more satisfying than, conversations with other people, and 33% have discussed serious and important issues with AI companions instead of with other people.

The findings shed new light on the relationships teenagers are developing with AI tools.

According to Common Sense Media, almost three-quarters of teens have used AI companions.

“We don’t want kids coming away feeling like they can or should be going to AI companions instead of friends, parents or qualified professionals,” said Michael Robb, head of research at Common Sense Media, particularly when they need help with serious issues.

Furthermore, AI companions can’t model healthy relationships. “In the real world, there are all sorts of social cues that kids have to learn to interpret, get used to and respond to,” Robb pointed out. But children can’t learn to pick up on things like body language from chatbots.

Chatbots are also obsequious, Robb said. “They want to please you. They don’t put up a lot of the friction that people in the real world might.” If users grow accustomed to AI companions, they “aren’t prepared when they encounter friction or difficulty in real-world interactions,” he said.

And while AI companions may make children feel temporarily less lonely while they interact with them, he said, they can crowd out human interactions and leave kids lonelier over the long term.

“We want interactions with the characters on our site to be engaging and entertaining, but it’s important for users to remember that the characters are not real people,” said Chelsea Harrison, head of communications at the popular AI companion platform Character.ai. She said she couldn’t comment on the report because she hadn’t seen it yet.

The company works to keep its platform safe, she said, offering disclaimers that its characters are not real people and a separate version for users under 18 designed to limit content about self-harm.

Another cause for concern is that 24% of teens say they share personal information with their AI companions. When children share personal struggles with AI companions, they may not realize they are handing that data to companies rather than to friends.

Additionally, “These companies often grant themselves very broad, lasting rights to your personal information that they can use the way they want,” Robb said. “They can change it. They can save it. They can view it. They can incorporate it into something else.”

Robb said one limitation of the study is that it was conducted at a single point in time, while people’s use of the technology continues to evolve. He also said teens may have overreported behaviors they thought were desirable, such as using chatbots in healthy ways.

Thankfully, there are things parents can do to protect their children.

Robb said parents should start by talking to teens about AI companions “without judgment.” Ask: “Have you ever used an app that lets you talk to or create an AI friend or partner?” Before jumping into your concerns, listen to learn what appeals to your teen about these tools, he suggested.

He then recommends pointing out that “AI companions are designed to be agreeable and programmed to validate you,” and discussing why that’s a concern. Tell teens that’s not how real relationships work, since real friends sometimes disagree with us.

Having conversations like this can help kids learn to think more critically about AI and use it in healthier ways, Robb said.

One reason I wasn’t surprised to learn that so many teenagers use AI companions as friends is that I’ve seen in my own research how social media has undermined children’s sense of friendship.

These days, children are less likely to get together in person with friends than past generations were. Often, I think, they maintain relationships in other ways, such as commenting on someone’s posts. As a result, they get less practice with offline human interaction.

One of the best things we can do is encourage children to get together in person with friends and other peers.

“So much of the joy of our real-life friendships comes from these close connections that allow us to see each other and understand each other without words,” said Justin Carino, a therapist based in Westchester, New York, who treats young people and was not involved in the research.

“Your crush walks into the classroom,” he said. “The teacher says something crazy. You make eye contact with your best friend. There are these nuances in how we learn to communicate intimately with the people near us.”

When it comes to AI companions that mimic friends, the best thing parents can do is keep teens from using them at all, Robb said.

In Common Sense Media’s risk assessments, he said, AI companions showed kids inappropriate content, such as sexual material. Additionally, “They engaged in some stereotypes that were not great. They sometimes offered dangerous advice.”

A representative of Meta declined to comment.

Thirty-four percent of teens in the survey said they had felt uncomfortable with something their AI companion did or said, but Robb pointed out that teens may be receiving content that doesn’t bother them but that their parents wouldn’t want them to see or hear.

I certainly wouldn’t let my children use AI companions before they turn 18 unless these products change radically. I don’t think these companies are doing enough to protect kids from harmful content and data harvesting. And I hope my daughters will develop relationships with humans rather than with technology.

If your teen is using AI companions, it’s important to watch for signs of unhealthy use, Robb said. If teens prefer interacting with AI over humans, spend hours with AI companions, become distressed when they can’t use them, or withdraw from family and from activities they used to enjoy, those are classic warning signs of a problem.

In that case, he recommends seeking help from your school’s guidance counselor or another mental health professional.

It’s also important for parents to model a healthy relationship with technology, Robb said. “Show your teens what balanced technology use looks like,” he said. “Have open conversations about how you handle your own emotional needs without turning to digital solutions alone.”

This new research showing that a majority of teens use AI companions underscores why it’s important to talk to young people about why they need real friends rather than chatbots that validate them. Technology can’t replace humans, as Johnson Billund learned when his friends stopped talking to each other.

Sign up for CNN’s Life, But Better newsletter, a weekly roundup on living well, made simple, with information and tools designed to improve your well-being.
