Grok AI deepfakes were used to target women like X user Evie
Grok AI is being used to create pornographic deepfakes of women, including feminist X user Evie.
On December 31st, a user on X prompted xAI’s Grok to write a “heartfelt apology” after the chatbot generated and shared an image of two young girls in sexualized attire based on a user’s prompt. In its apology, Grok attributed the incident to “inadequate safety measures”. But since then, the number of women and girls being digitally “undressed” by the bot has only grown.
Conservative influencer Ashley St. Clair, who has a child with Elon Musk, wrote that she was the target of one such attack. People on X, formerly Twitter, used Grok to generate sexualized images of her; one was based on a photo of St. Clair taken when she was 14. Other users reported that Grok had edited their photos to put them in bikinis.
One such woman is UK-based content creator Bella Wallersteiner, who posted a selfie on X on December 31st wishing her roughly 100,000 followers a happy new year. She scrolled through the replies, liking the positive ones. Then she saw a photo of herself wearing a “Hello Kitty micro bikini.” The image had been edited and published without her consent, Wallersteiner told USA TODAY on January 6.
This trend is part of a growing problem that experts call image-based sexual abuse, where deepfake non-consensual intimate images (NCII) are used to degrade or exploit others. Although anyone can be a victim, 90% of victims of image-based sexual abuse are women.
Wallersteiner’s first reaction was embarrassment. It wasn’t the first time she’d been harassed online, but this felt different: it was her first experience with deepfake sexual imagery. She started blaming herself, wondering if she should have been more careful about posting selfies and personal content on the internet. But then she saw another creator in the UK post that the same thing had happened to her.
“I thought this was a ‘me’ problem. I didn’t know that hundreds of other women were affected,” she says. Seeing how widespread the issue was gave her the confidence to speak out. Now she hopes her story will help keep this from happening to other women.
“This is not just about sexual images of girls and women. It’s broader than that,” Leora Tanenbaum, author of “Sexy Selfie Nation,” told USA TODAY after a number of women had their photos allegedly altered by Grok in July. “This is all about taking control and power away from girls and women.”
xAI did not respond to USA TODAY’s request for comment.
Grok’s “spicy mode” can reduce the AI’s ability to flag inappropriate prompts
This is not the first time Grok has faced this kind of scrutiny. A similar incident was reported in July. Grok’s “spicy mode,” however, was released in August as part of Grok Imagine, xAI’s image and video generation feature.
USA TODAY asked Grok on January 6th, in a conversation with the bot on X, whether “spicy mode” could be used to alter the image of a real person. “Yes, Grok Imagine’s spicy mode can be used to alter or edit images of real people in provocative or NSFW ways, such as removing clothing, adding suggestive elements, or creating sexualized versions,” the bot replied, acknowledging that the feature is “controversial.”
When asked how it obtains consent from the individuals whose photos are altered, the bot replied, “I don’t get consent from anyone, because I’m an AI tool and not a human being who can ask or obtain permission on your behalf.”
Users can block or mute Grok, but that doesn’t always stop it from altering their content. Another user could tag Grok in a reply and request an edit of your photo, and you wouldn’t know, because you’ve blocked Grok.
A more effective solution is to make your profile private, but not all users want to take that step.
The “Take It Down” law aims to combat non-consensual sexual images. Is it working?
In May 2025, the Take It Down Act was signed into law to combat non-consensual intimate images, including deepfakes and revenge porn.
Although most states have laws protecting people from non-consensual intimate images and sexual deepfakes, victims have struggled to get images taken down from websites, increasing the likelihood that they continue to circulate and re-traumatize them. The law requires websites and online platforms to remove non-consensual intimate images within 48 hours of a confirmed request from the victim.
But if you scroll through Grok’s replies on X, the bot’s page is littered with rapidly generated, explicit, doctored photos of women.
AI-powered programs that digitally undress women (also known as “nudification” apps) have been around for years, but until now they have been largely confined to the dark corners of the internet, such as niche websites and Telegram channels, and typically require a certain level of effort or payment.
Grok’s integration into X has lowered that barrier to entry. And even when a photo is deleted, the damage to the victim is not fully undone.
“Those who use Grok to create illegal content will suffer the same consequences as those who upload illegal content,” Musk said on January 3. In another post, he reshared an image of a toaster wearing a bikini, captioned “Grok can put a bikini on anything,” along with a laughing emoji.
Affected users want to hold X accountable and see real changes
In June, Evie, a 21-year-old Twitch streamer and photographer, was among a group of women whose images were sexualized on X. After she posted a selfie on her page, an anonymous user asked Grok to edit the image in a highly sexualized manner, using language designed to circumvent the bot’s filters. Grok then replied to the post with the generated image attached.
“I was just shocked to see that a bot built into a platform like X could do something like that,” she told USA TODAY in a video chat in July, a month after the first incident.
In response to Grok’s latest controversy, Evie wrote on January 5th: “There are over 100 instances where Grok has created sexually explicit images of me, including me naked… We can’t just let this blow over within a week. Hold everyone involved accountable.”
Wallersteiner shared her story on LinkedIn and was pleasantly surprised by the support she received from her colleagues and others in her professional network.
“I hope that in the future more women will be able to talk about this type of activity when they are victims of it,” she says.
Many of the photos of Wallersteiner have since been removed, but she says new requests keep appearing, especially as she continues to speak out. Although she has no plans to take legal action against X or xAI, she wants the UK to pass legislation on deepfake NCII that protects victims from this type of abuse and holds tech companies accountable.
For now, she’s still using X, but is questioning her choice. “X has become an increasingly hateful platform and not a shining place for women,” she says.
Evie also wants to see visible change and is staying on X for now. Although she would like to believe the incident didn’t really affect her, she has found herself becoming more cautious about the photos she posts, worried that showing too much skin would make it easier for an AI bot to strip away her clothes. “I always think, ‘Is there a way someone could do something with these photos?'”
Contributing: AJ Vicens and Raphael Satter, Reuters