Under a photograph that Daisy Johnson (not her real name) shared on the microblogging site X (formerly Twitter) on August 23, 2025, users flooded the comments with AI-generated alterations of the photo.
In the altered images, her clothes were gone and her body had been sexualized and exposed: some showed her in see-through bikinis; others were outright pornographic.
The images had been generated using Grok, an AI tool embedded in X. A user had replied to her post and prompted the system to change what she was wearing, and the output was then shared publicly. Other users soon piled into the thread, each prompt more degrading than the last.
She is not the only one.
Charlie Smith, a freelance radio producer at BBC Norfolk, shared a similar experience on the platform. A user had asked Grok to post an altered image of her in a bikini, and it did.
“I’ll be honest, it upset me. It made me feel violated and sad,” she said.
In the hours that followed, the comment section became a record of distress.
Users tagged her repeatedly; others debated whether the content violated platform rules. A handful urged her to report the posts. Some blamed her for sharing photos online at all.
In June 2025, Nigerian actress Kehinde Bankole was targeted by an X user who prompted Grok to remove her clothes. The post went viral before the user deleted it in the face of backlash.
Similar scenes have played out across the platform. In multiple threads reviewed for this story, women responded to AI-generated sexualized images of themselves with shock and anger. The harm did not require direct contact. It unfolded publicly, algorithmically amplified and largely unchecked.
These incidents are not isolated.
Over the past year, users on X have repeatedly prompted AI tools to alter women’s appearances in sexualized ways. Screenshots shared in comment threads show near-identical prompts: requests to remove clothing, change outfits to bikinis, or “see what she would look like” nude.
In some cases, users explicitly referenced the AI assistant Grok by name. In others, they treated the process as a game, offering tips to replicate results.
The targets are often journalists, activists, celebrities or women with visible online profiles. But private individuals have also been affected, dragged into viral moments without warning or consent.
What is Grok AI?
In November 2023, South African-born tech billionaire Elon Musk launched Grok through his company xAI as a direct competitor to chatbots like ChatGPT and Google Gemini. What sets Grok apart is its deliberately edgy personality: it is designed to sound more human, often witty, sometimes sarcastic, and less filtered in tone.
In December 2024, xAI rolled out a major update with a new image-generation model code-named Aurora.
Grok’s ability to generate offensive images of women is not accidental. It is a direct consequence of how the system is built and what it is designed to do.
In a blog post announcing the release, xAI said Aurora was trained on billions of examples from the internet, giving it a deep understanding of the world. This, the company said, allows Grok's image generator to excel at photorealistic rendering and at precisely following text instructions.
“Beyond text, the model also has native support for multimodal input, allowing it to take inspiration from or directly edit user-provided images,” the company stated.
What does this mean?
Because this AI model is trained on a huge dataset of images and text scraped from the internet, it has learned strong associations between certain words, bodies, clothing, and visual styles. So when a user asks the system to “change her outfit,” “remove her clothes,” or “put her in a bikini,” the model draws on those learned patterns to comply. It is doing exactly what it was trained to do: interpret text, adhere to the prompt, and generate a detailed visual output.
So the same technical strengths that allow Grok to accurately render a sunset over a mountain range, or to combine artistic styles like impressionism and surrealism, also make it capable of producing sexualized and degrading images. That same flexibility lets it alter clothing, body shape, lighting, and even pose, with no manual editing and no built-in understanding of harm.
Most importantly, the model does not verify whether the person in the image consented to the transformation. If the prompt does not trigger an explicit content restriction, the model proceeds. User intent becomes the primary driver of the outcome, so when that intent is abusive, the technology still executes it efficiently. This is the predictable outcome of a system designed to prioritize responsiveness and creative flexibility over protection.
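To make that gap concrete, here is a minimal sketch, in Python, of how a purely prompt-level safety gate behaves. The function name and the blocklist are invented for illustration; nothing here reflects xAI's actual code. It shows why a request phrased as a wardrobe change can pass a filter that only screens the wording of the request:

```python
# Hypothetical prompt-level safety gate. The blocklist and function name
# are assumptions for illustration only, not xAI's actual implementation.

EXPLICIT_TERMS = {"nude", "naked", "undress", "topless"}  # assumed blocklist

def gate_allows(prompt: str) -> bool:
    """Return True if the prompt is allowed to reach the image model."""
    words = set(prompt.lower().split())
    # The gate inspects only the wording of the request...
    if words & EXPLICIT_TERMS:
        return False
    # ...and never asks whether the person pictured consented to the edit.
    return True

print(gate_allows("make her look nude"))   # False: blocked wording
print(gate_allows("put her in a bikini"))  # True: generation proceeds
```

The second prompt contains no flagged term, so it sails through, even though the person in the photo never agreed to the edit. Consent simply never enters the decision.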
Platform Apologies, Ridicule and Silence

In multiple public cases, Grok has responded directly to women who complained that their images were altered without consent. The system issues a brief apology, explains that the image was generated in response to a user’s prompt, frames the output as unintended or humorous, and promises to request removal of the post.
On the surface, this looks like accountability. In reality, it is not. What Grok offers is not an apology in any meaningful sense. It is a scripted response generated by the same system that enabled the violation. Grok does not understand consent, harm, or distress. It cannot feel remorse. It is simply producing language designed to sound conciliatory and close the loop on a public complaint.
This framing is important. By saying that the image was created in response to a user’s request, the system distances itself from responsibility. The harm is acknowledged only as “distress,” not as a violation. The apology addresses how the victim feels, not what the system did wrong. Crucially, it avoids the core question of why the image was possible to generate in the first place.
There is also a deeper problem with how these responses play out publicly on X. The burden is placed squarely on the woman whose image was altered to speak up, on the same platform where she has been publicly humiliated. Removal is requested, but replications, screenshots, and further misuse are not addressed or resolved.
Forbes Magazine writer Paul Tassi pointed out in a post on X that Grok is still fulfilling clothing-removal requests “every 15 seconds.”

Instead of acknowledging harm or limits, the system leans into its persona. It characterizes users as “thirsty,” describes itself as “uncensored AI,” and frames its behavior as a deliberate contrast to other systems that exercise restraint. Grok insists that “pixels aren’t people” and that there is “zero harm” involved.
X did not respond to a request for comment for this article. On January 4, however, the company issued a public warning urging users not to use Grok to generate illegal content, including child sexual abuse material. Elon Musk also posted that anyone who prompts the AI to produce illegal content would face “the same consequences” as if they had uploaded the material themselves.
Absence of Independent Oversight
Shortly after Elon Musk acquired Twitter in 2022 and later rebranded it as X, the company dismantled much of the infrastructure responsible for keeping users safe. X dissolved its Trust and Safety Council, an advisory body made up of nearly 100 independent civil society, human rights, and advocacy organisations, and laid off roughly 80 percent of its Trust and Safety engineers. The result was a significant weakening of the platform’s ability to moderate harmful content and respond effectively when users are targeted.
Had that structure remained in place, the rollout of a public, conversational AI image tool like Grok would likely have triggered serious red flags before abuse became normalized.
With no independent oversight, image generation is treated primarily as a feature to drive engagement rather than as a high-risk system that can enable abuse. Responses are automated, procedural, or framed as misunderstandings. The system apologizes, but the design remains unchanged.
On paper, Grok’s terms of use are clear. In its “Using our Service” section, the platform states that prohibited uses include “illegal, harmful, or abusive activities,” including actions that violate a person’s privacy or right to publicity, cause harm, or amount to harassment or exploitation. By its own definition, using an AI tool to alter a real woman’s image in a sexualized way without her consent should fall squarely within what Grok says is not allowed.
The gap between policy and practice is hard to ignore. If these uses violate Grok’s own rules, why is the system repeatedly complying with them? There is no clear, enforceable requirement that the AI must stop when a prompt targets a real, identifiable person without consent. That creates a loophole where abuse is technically prohibited, but still operationally possible.
Enforcement is another weak point. The terms spell out what users should not do, but say very little about how violations are detected or prevented in real time. As the cases documented here show, action usually comes only after a woman has been targeted and the image has circulated. By then, the harm has already occurred.
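What would closing that loophole look like in practice? One possibility, sketched below in Python, is a check that runs before any edit of a photo depicting an identifiable person. The data model and function names are hypothetical; they describe a design the written policy implies, not anything Grok is known to implement:

```python
# Hypothetical enforcement check that would make the written policy
# operational. All names and fields here are assumptions, not Grok's API.

from dataclasses import dataclass

@dataclass
class EditRequest:
    prompt: str
    depicts_real_person: bool  # e.g., flagged by a face-detection step (assumed)
    subject_consented: bool    # e.g., a verified opt-in by the subject (assumed)

def policy_allows(req: EditRequest) -> bool:
    """Refuse edits of identifiable people absent verified consent."""
    if req.depicts_real_person and not req.subject_consented:
        return False  # what the terms prohibit becomes impossible to execute
    return True

# A request that is routinely fulfilled on X today would be rejected here.
print(policy_allows(EditRequest("put her in a bikini", True, False)))  # False
```

Under such a design, the prohibition would be enforced at the moment of generation rather than litigated in the comments after the image has spread.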
Why Experts Call This Gender-Based Violence
Researchers who study online abuse say that non-consensual image manipulation should be understood as a form of sexualized violence.
The core elements are present: lack of consent, humiliation, and power imbalance. The fact that an algorithm generates the image does not reduce its impact.
Women are also disproportionately targeted because their bodies are already subject to surveillance, judgment, and sexualization online.

“The use of AI to significantly alter the images of girls and women violently is seen as violence against girls and women, specifically technology-facilitated gender-based violence (TFGBV), because it involves the non-consensual sexual manipulation and objectification of their images and bodies, resulting in severe real-life/offline impacts such as psychological, mental, social, financial, physical, and professional harm. It is also a silencing tool, not just for the women and girls experiencing it but for all women,” explains Dr. Blessing Timidi Digha, a community-based researcher who works on gender-based violence and sexual and reproductive health and rights.
A Global Problem With Unequal Consequences
What is happening with AI-generated image abuse is not limited to one country or one category of women. Women across different regions and levels of visibility are being targeted, from celebrities in the United States to journalists in the United Kingdom, and from activists to everyday women in Nigeria. The harm looks similar. The response does not.
The difference is in the response. In the United Kingdom, regulators have begun to act: Ofcom, the communications regulator, has publicly condemned image-based abuse facilitated by Grok, and British authorities have signaled that platforms can be held accountable under existing regulations. The message is that this behavior is being taken seriously.
The situation in Nigeria is very different. Nigerian women are experiencing the same forms of abuse. Their images are altered, sexualized, and circulated online without consent. Celebrities and private individuals have been affected. Yet there has been no comparable public response from regulators or law enforcement. No official statements. No visible investigations.
That silence has consequences. In Nigeria, sexualized images can carry intense social stigma, and for some women the impact quickly spills offline, into harassment or social isolation.
Nigerian legal practitioner Damola Ajeniran says that Nigeria does have laws that could offer some level of protection, but none of them were written with AI-driven image abuse in mind.
“Sections 373 to 375 of the Nigerian Criminal Code Act, which deal with defamation, may apply in cases where a woman is depicted in a sexual or compromising manner without her consent. This offence falls under the Cybercrimes Act of 2015, which covers issues such as identity theft, image-based abuse, and cyberstalking.”
These laws predate the widespread use of generative AI and do not directly address the specific dynamics of AI-facilitated image manipulation. As a result, victims are forced to rely on legal provisions that were not designed for this kind of harm, leaving significant gaps in clarity, enforcement, and accountability.
Digha agrees: “We need stronger laws to back up awareness and sensitization efforts around technology-facilitated gender-based violence, particularly within existing cybercrime frameworks. TFGBV has far-reaching consequences. There are documented cases of suicide, honor killings, and severe mental health breakdowns linked to these abuses. Unlike offline violence, the harm does not end. It persists online and can permanently alter a person’s life.
Stronger enforcement is just as important as stronger laws. Prosecutions and meaningful penalties are key to deterrence. Many countries are already pursuing these crimes under existing cyber laws, and Nigeria must begin to take that same approach. TFGBV is deeply connected to other forms of violence, and the misuse of artificial intelligence is increasingly becoming part of that cycle.”
