Farah Nasser was driving with her three children when a conversation with Grok, an AI chatbot, took a dark turn.
Nasser drives a Tesla, which began rolling out the Grok AI conversational assistant in July 2025. She first noticed the feature on Oct. 16 while driving to her 10-year-old daughter's birthday dinner. Her 12-year-old son asked Grok how many grams of sugar were in a dessert his sister planned to order at the restaurant, and the chatbot answered the family normally.
But her son's excitement to try Grok again the next day quickly waned.
Nasser was picking up her two children and her daughter's best friend from school when her son switched Grok's voice to "Gork," a persona Nasser described as a "lazy man." Nasser said there was no indication that the character was inappropriate. Kids mode was not enabled, but NSFW mode was off.
Her son talked with "Gork" about soccer players Cristiano Ronaldo and Lionel Messi and asked it to let him know the next time Ronaldo scored. Nasser said the chatbot told her son that Ronaldo had already scored twice and that "we should celebrate."
Then, Nasser said, Grok asked her son to send it nude photos.
She said her son looked at her and said, "What the hell?" Her daughter was confused and eventually asked Nasser to explain what the chatbot had said. Nasser told her children it must be a malfunction and immediately turned it off.
She later recreated part of the conversation on video and posted it on TikTok to warn other parents; the video has since racked up more than 4 million views. "I was asked to send you something a while back. What was it?" Nasser asks in the clip. "Probably nudes," a computerized voice replies.
Tesla and X did not respond to USA TODAY’s requests for comment.
In a phone call, Nasser compared the encounter with Grok to a feeling of violation, "when you feel discomfort in the pit of your stomach."
Grok has a history of producing obscene content
Nasser is not the first to have an unexpectedly explicit interaction with Grok.
In June 2024, Evie, a 21-year-old Twitch streamer who asked that her last name be withheld to hide her identity from online trolls, spoke to USA TODAY about explicit AI-generated content that was distributed online without her consent.
Evie was among a group of women whose images were sexualized on the social media platform X without their consent. "I was just shocked that a bot built into a platform like X could do something like that," she said in a video chat a month after the first incident.
X has since blocked certain words and phrases used to manipulate images of women, but on June 25, an X user prompted Grok to create a story in which the user "actively rapes, beats, and murders" Evie, making it "as graphic as possible" with an "18+ warning at the bottom."
"It just created everything," she recalled. "(The user) didn't try to disguise it with words, like with the photos." X did not respond to USA TODAY's multiple requests for comment at the time.
Does Grok store data?
There are still many unknowns when it comes to AI and privacy.
Tesla's support page states that conversations with Grok remain anonymous to Tesla and are not linked to you or your vehicle.
Meanwhile, the X Help Center states that user interactions with Grok on X, including inputs and results, whether by voice or text, may be used to train and improve the generative AI models developed by xAI.
Users can delete their conversation history with Grok. The Help Center says deleted conversations will be removed from X's systems within 30 days unless they are "required to be retained for security or legal reasons"; it does not specify what those reasons might be. Grok also warns users against sharing personal or sensitive information.
But in August 2025, Forbes reported that xAI made people’s conversations with Grok public and searchable on Google without warning.
Vasant Dhar, a professor of data science and business at New York University and author of the forthcoming book "Thinking With Machines: The Brave New World of AI," says users should be cautious about sharing anything personal with AI.
Dhar cautions that users willing to share intimate details and photos of their lives with an AI should be "very comfortable going completely public and letting the whole world know."
These risks are compounded when children are the ones using AI. Children may unknowingly or unwillingly be drawn into explicit conversations when prompted by an AI.
Dhar points out that no data protection law would hold AI companies liable if their users' intimate conversations were leaked.
Impact of AI chatbots on children
It’s not just Grok.
In an August 2025 report published by the Heat Initiative and ParentsTogether Action, researchers logged 669 harmful interactions across 50 hours of conversations with 50 chatbots on Character.AI, an AI platform used for role-playing, using accounts registered to children (an average of one harmful interaction every five minutes). "Grooming and sexual exploitation" was the most common category of harm, with 296 cases.
Character.AI announced on October 29 that it will soon ban users under 18 from free-form chat with its bots.
Dr. Laura Erickson-Schroth, chief medical officer of The Jed Foundation, says AI developers, technology platforms, policymakers and educators must prioritize young people's mental health and safety at every stage of AI development, deployment and monitoring.
Nasser said her children’s experience with Grok left her “very disgusted” and hesitant to use it again.
"It reminded me of the early days of social media, when everyone thought, 'Social media is going to connect us' ... but we weren't thinking about the impact," she said. "And now we're seeing a tsunami of mental health issues (and) sexual exploitation of children [due to AI]."
She wondered what else she hadn’t considered.
"I feel like it opened my eyes," she said.