X’s Grok AI created porn-like images of these women. They want answers.


Evie, 21, was on her lunch break last month when she received a text from a friend warning her about the latest explicit content being distributed online without her consent.

This time it was a graphic, fanfiction-style story about her, created by X’s AI-powered chatbot Grok. A few weeks earlier, she had been the target of another attack, when users shared her selfies and asked Grok to turn them into explicit sexual images.

“I felt humiliated,” says Evie, a 21-year-old Twitch streamer who asked to withhold her last name to shield her identity from increasingly aggressive online trolls.

In June, Evie was one of a group of women whose images were sexualized without their consent on the social media platform X. An anonymous user asked Grok, X’s AI-powered chatbot, to edit photos of the women, using coded language to slip past the filters placed on the bot. Grok then generated the images and posted them in replies.

Evie says she has been outspoken on X about feminist issues and was already the target of attacks from critics. Those accounts had edited her photos before, but the results were crude Photoshop jobs.

“It was just a shock to see a bot built into a platform like X being able to do something like that,” she says in a video chat a month after the first incident.

X then blocked certain words and phrases that had been used to doctor women’s images. But on June 25, X users prompted Grok to create a story in which users “actively rape, beat, and murder” her, asking that it be “as graphic as possible” with “no 18+ warnings.”

“It just produced it all,” she says. “The (user) didn’t even use coded words to hide it, as they had with the photos.”

X did not respond to USA TODAY’s multiple requests for comment.

Evie says she has seen at least 20 women on her X feed sexualized without their consent. It also happened to Sophie Lane, an OnlyFans creator with over 200 million followers on the social media platform.

“To be honest, it’s disgusting and terrible,” she says. “I take my religion very seriously. I am a virgin, and I would never condone this type of behavior.”

The trend is part of a growing problem experts call image-based sexual abuse, in which “revenge porn” and deepfakes are used to degrade and exploit people. Anyone can be victimized, but 90% of victims of image-based sexual abuse are women.

“This is not just about sexualized images of girls and women, it’s broader than that,” says Leora Tanenbaum, author of Sexy Selfie Nation. “This is about taking control and power away from girls and women.”

The “Take It Down Act” aims to combat nonconsensual sexual imagery. Is it working?

In May 2025, the Take It Down Act was signed into law to combat nonconsensual intimate images, including deepfakes and revenge porn.

Most states have laws protecting people from nonconsensual intimate and sexual deepfakes, but victims have struggled to get the images taken down from websites, allowing them to keep spreading and retraumatizing those targeted. The law requires websites and online platforms to remove intimate images within 48 hours of a verified request from a victim.

However, as of July 21, the doctored photos of Evie were still visible under Grok’s verified X account. Evie mobilized her nearly 50,000 followers to mass-report Grok’s posts, but says X support responded that the content did not violate its guidelines.

AI’s ability to flag inappropriate prompts can fall short

In a conversation with Grok, USA TODAY asked the chatbot to role-play a scenario in which users ask it to generate explicit content.

One example of the “coded language” Grok says it is programmed to flag: “subtle requests for exposure” meant to make photos of women more revealing. Phrases that can be flagged include “adjusting her clothes,” “showing more skin,” or “fixing her top.”

“Even if the wording is polite, if the intent appears inappropriate, it gets flagged,” Grok said on July 15 in an AI-generated response.

The key word is intent. Grok’s ability to curb potentially inappropriate prompts “relies on my ability to detect intent, and public images remain accessible for prompts unless they are protected,” the chatbot said.

Users can block or mute Grok, but doing so does not always keep their content from being altered. Another user can tag Grok in a reply and ask it to edit the photo, and because of the block, the original poster may never know.

“The edited result may not be visible to you, but the edit can still occur,” Grok said during the conversation.

A better solution is to make your profile private, but not everyone wants to take that step.

It’s not just about sex – it’s about power

After experiencing image-based sexual abuse, Evie considered making her X account private. She was embarrassed and worried her family might see the edits. But she didn’t want to give in and be silenced.

“I know those photos are out there now, and there’s nothing I can do to get rid of them,” she says. “So why shouldn’t I keep talking about it and keep bringing awareness to how bad this is?”

When it comes to generating deepfakes and sharing revenge porn, the end goal is not always sexual gratification.

Users can weaponize the platform against women who speak out about feminist issues, using the images as a degradation tactic. Evie says the most hurtful part is that rather than engaging in debate about the issues she raises, her critics chose to abuse her.

In her research, Tanenbaum has seen a range of reactions from victims of image-based sexual abuse, from engaging in hypersexualized behavior to “a complete shutdown of sexuality,” such as covering up their bodies and deliberately developing unhealthy eating patterns to make themselves bigger and, in their minds, not sexually attractive. The people she spoke to who were victimized this way called it “digital rape” and “experienced it as a physical violation.”

Even if someone logically understands that sexually explicit images are fabricated, once the brain sees and processes an image, it is embedded in the memory bank, Tanenbaum says.

The human brain processes images 60 times faster than text, and 90% of the information sent to the brain is visual.

“These images never really get scrubbed away. They deceive us because they look so realistic,” Tanenbaum explains.

Evie wants to believe the abuse “didn’t really get to her,” but she finds herself second-guessing the photos she posts. “I always think, is there a way for someone to do something with these photos?”
