Grok turns off image generator for most users after outcry over sexualised AI imagery


Grok, Elon Musk’s AI tool, has switched off its image creation function for the vast majority of users after a widespread outcry about its use to create sexually explicit and violent imagery.

The move comes after Musk was threatened with fines and regulatory action, amid reports of a possible ban on X in the UK.

The tool had been used to manipulate images of women to remove their clothes and put them in sexualised positions. The function to do so has been switched off except for paying subscribers.

Posting on X, Musk’s social media network, Grok said: “Image generation and editing are currently limited to paying subscribers.”

That means the vast majority of users of the platform cannot create images using Grok. Those who can have their full details and credit card information stored by X, meaning they can be identified if the function is misused.

Research revealed by the Guardian found it had been used to create pornographic videos of women without their consent, as well as images of women being shot and killed.

Musk is facing the threat of regulatory action from around the world after Grok was used to create nonconsensual sexual imagery.

Keir Starmer, the UK prime minister, threatened on Wednesday to take strong action against the social media company.

He demanded that X “get a grip” of the deluge of AI-created photos of partly clothed women and children on the platform, describing the content as “disgraceful” and “disgusting”.

Indicating that a de facto ban of X should be considered, Starmer said the communications regulator Ofcom “has our full support to take action in relation to this”.

Under the UK’s Online Safety Act, Ofcom has the power in serious cases to seek a court order to block a website or app in the UK. It can also impose fines of up to 10% of a company’s global turnover.

Starmer added: “It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”

Thousands of sexualised images of women have been created without their consent over the past two weeks, after the Grok image creation feature was updated at the end of December. Musk has faced repeated public calls to remove or restrict the feature, but until now the social media app had not acted.

Jess Asato, a Labour MP who has been campaigning for better regulation of pornography, said: “While it is a step forward to have removed the universal access to Grok’s disgusting nudifying features, this still means paying users can take images of women without their consent to sexualise and brutalise them. Paying to put semen, bullet holes or bikinis on women is still digital sexual assault and xAI should disable the feature for good.”

Some of the most offensive content is being created off the X platform via the Grok Imagine tool, which is integrated with Grok.

Research by AI Forensics, a Paris-based non-profit organisation, found about 800 images and videos created by the Grok Imagine app that included pornographic and sexually violent content. Paul Bouchaud, a researcher at AI Forensics, said: “These are fully pornographic videos and they look professional.”

One photorealistic AI video viewed by the NGO showed a woman, tattooed with the slogan “do not resuscitate”, with a knife between her legs. Other images and videos contained erotic imagery, images of women undressing, suggestive poses and videos depicting full nudity and sexual acts.

“Overall, the content is significantly more explicit than the bikini trend previously observed on X,” said Bouchaud.

X has been contacted for comment.
