On X, a woman posts a photo in a sari, and within minutes users are underneath the post tagging Grok to strip her down to a bikini. It is a shocking violation of privacy, but it has become a commonplace practice. Between June 2025 and January 2026, I documented 565 instances of users asking Grok to create nonconsensual intimate imagery. Of these, 389 were requested in just one day.
Last Friday, after a backlash against the platform’s ability to create such nonconsensual sexual images, X announced that Grok’s AI image-generation feature would be available only to subscribers. Reports suggest the bot no longer responds to prompts to generate images of women in bikinis (although it apparently will still do so for requests about men).
But as the technology secretary, Liz Kendall, rightly states, this action “does not go anywhere near far enough”. Kendall has announced that creating nonconsensual intimate images will become a criminal offence this week, and that she will criminalise the supply of nudification apps. This is appropriate, given X’s weak response. Placing the feature behind a paywall means the platform can profit more directly from the online dehumanisation and sexual harassment of women and minors. And stopping the “bikini” responses only after public censure and the threat of legislation is the least X could do – the bigger question is why it was possible in the first place.
These measures are a step forward. The shadow technology secretary, Julia Lopez, suggested in her response that the government was overreacting, that this was just “a modern-day iteration of an old problem”, no different from crude drawings or Photoshop. She is wrong. The scale is different. The accessibility is different. The speed is different. Photoshop requires technical skill, and the user must publish the image themselves, which places responsibility for everything except providing the platform squarely on them. Here, the user simply posts a text reply with a request, and Grok generates and publishes the abusive image to a massive audience.
Kendall’s approach criminalises users who create or alter these images, and the companies that supply dedicated nudification tools. That is where it misses the point. Grok and most prominent image-generation tools are not dedicated nudification tools; they are general-purpose AI systems with weak safeguards. Kendall is not asking platforms to implement proactive detection. The law waits for harm to happen, then punishes it.
The drawbacks of this approach are obvious. I observed this material being generated for months before the mainstream backlash began. The harmful images that were generated still exist, and many have probably been saved and shared across other platforms. For the victims of this AI-generated sexual abuse material, regulation after the fact won’t help. For harm that is structurally amplified in this way, the approach must be preventive, not reactive.
Another, more fundamental, problem is that while the UK pushes AI safety regulation, the US is moving in the opposite direction. The Trump administration wants to “enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”. Under this framework, there is little incentive for American AI companies to regulate misuse of their products. This matters because AI regulation is incomplete without cross-border collaboration. Kendall can criminalise users in the UK; she can threaten to ban X entirely. But she cannot stop Grok from being programmed in San Francisco. She cannot force OpenAI or Anthropic or any other US company to prioritise safety over speed. Without US cooperation, we are trying to regulate a transnational technology with national laws.
While this wrangling over regulation and policy plays out, many victims, and other women online, will be wondering what this new era of AI-enabled online sexual harassment means for them, and questioning their participation on global social media platforms. If my image has been digitally altered, how do I get justice when the perpetrator is halfway across the world? Transparency in the practices of AI companies is in decline – so how can those same companies be trusted to be accountable and to audit systems that reproduce harm?
The truth is that these companies cannot be trusted. That is why, globally, regulation needs to shift from “remove harm when you find it” to “prove that your system prevents harm”. We must code power into the process by requiring mandatory input filtering, independent audits and licensing conditions that make prevention a legal and technical requirement. This approach can catch harm before it materialises, allowing regulators to curb harmful behaviour by AI companies before their products are deployed. This is the type of work that we at the AI Accountability Lab in the Adapt Centre at Trinity College Dublin are pushing forward through our research.
Regulation after the fact is better than nothing. However, it offers little to the victims who have already been harmed, and sidesteps the conspicuous absence of law enforcement in addressing these platform harms.
Nana Nwachukwu is an AI governance expert and a PhD researcher at Trinity College Dublin
