Grok AI Row Forces UK to Activate Tough New Deepfake Laws

Grok AI Restricted After Outcry Over Sexualised Images of Women and Children

Elon Musk’s social media platform X has moved to restrict its artificial intelligence tool, Grok, from editing images of real people to place them in revealing clothing, marking a significant reversal after mounting political and regulatory pressure in the UK and the United States.

The company confirmed on Wednesday night that it had introduced new technical safeguards to prevent Grok from altering photographs of real individuals to depict them in sexualised or revealing attire, such as bikinis. The restriction applies to all users, including paid subscribers.

X said the changes were part of a wider update to its global safety framework, following growing concern that the AI tool had been used to digitally “undress” women and children without consent.

The announcement came just hours after California’s attorney-general launched an investigation into the creation and spread of sexualised AI-generated images involving minors. Musk has repeatedly denied that Grok produces illegal content, insisting the tool operates within the laws of individual jurisdictions.

In the UK, Prime Minister Sir Keir Starmer said the government had been assured that X was now prepared to comply fully with British law on artificial intelligence and image-based abuse. Speaking during Prime Minister’s Questions, Starmer described the misuse of AI tools to generate degrading images as “disgusting and shameful” and warned that legislation would follow if platforms failed to act.

He told MPs that X had indicated it was taking steps to ensure compliance but stressed that regulators would continue to scrutinise the platform’s conduct. Ofcom has already opened an investigation into whether X has breached its duties under the Online Safety Act, with powers ranging from fines to blocking the service in the UK.

X later confirmed that in countries where editing images of people into revealing clothing is illegal — or becomes illegal — the feature will be geo-blocked across all Grok products. Jonathan Lewis, X’s UK managing director, said the feature had been restricted to prevent images of real people from being edited into revealing outfits, citing examples such as digitally placing individuals into bikinis.

While image creation and editing tools linked to Grok remain available to paid users, X said the new measures were designed to add accountability and an extra layer of protection. However, critics argue that placing such tools behind a paywall does little to address the underlying harm.

The row has sparked wider scrutiny of AI platforms beyond X. Investigations by journalists and campaigners suggest other tools, including ChatGPT, can also be prompted to generate images that simulate “digital undressing”, although rival platforms such as Google’s Gemini and Anthropic’s Claude reportedly block similar requests.

Labour MP Jess Asato, who has campaigned against nudification tools after becoming a victim herself, said the issue extends far beyond one company. She warned that the combination of AI image manipulation and social media distribution amplifies harm and makes abuse harder to trace.

In California, Attorney-General Rob Bonta said the spread of non-consensual sexualised AI images was “shocking”, while Governor Gavin Newsom accused xAI of creating what he described as a “breeding ground for predators”.

Musk and his supporters have pushed back strongly, claiming the regulatory action amounts to politically motivated censorship. The billionaire has insisted Grok will refuse to generate illegal material and that any abuse of the tool stems from user prompts rather than the system itself.

The UK government is expected to introduce new legislation this week making it a criminal offence to create non-consensual intimate images. While digitally generated bikini images are not automatically illegal under current law, legal experts say context and repeated sexualised prompts could bring such content within the scope of existing offences.

As regulators tighten oversight, the controversy has reignited debate over how fast-moving AI technologies should be governed — and whether platforms can be trusted to police themselves without firm legal boundaries.
