Ofcom is continuing with its investigation into X, despite the social media platform saying it will block Grok from digitally undressing people.
A spokesperson for the UK comms regulator said on Thursday: "X has said it's implemented measures to prevent the Grok account from being used to create intimate images of people.
"This is a welcome development. However, our formal investigation remains ongoing. We are working around the clock to progress this and get answers into what went wrong and what's being done to fix it."
The statement follows X confirming that it has "implemented technological measures" to prevent Grok from editing images of real people, making them appear as though they have fewer or no clothes.
Ofcom first made contact with X on January 5, following widespread reports that its AI chatbot, Grok, was being used to digitally undress images and generate sexualized depictions of real people – mainly women but also children.
A week later, the regulator opened a formal investigation into X to understand whether it had complied with the Online Safety Act.
On Wednesday evening, via its Safety account, X stated: "We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.
"We take action to remove high-priority violative content, including child sexual abuse material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking child sexual exploitation materials to law enforcement authorities as necessary."
As well as blocking Grok from nudifying subjects, X has implemented a geoblock on its chatbot's ability to generate images of people in bikinis, underwear, or similarly revealing clothes – known internally as "spicy mode" – in regions where such content is restricted by law.
The Elon Musk-owned platform's first attempt at damage control was merely to limit Grok's nudifying capabilities to paid users; previously, they had been available to anyone registered on the site.
However, technology secretary Liz Kendall strongly rejected this move, calling it "an insult and totally unacceptable for Grok to still allow this if you're willing to pay for it."
X has now updated this, saying the restriction "applies to all users, including paid subscribers."
Kendall issued a fresh statement on Thursday, following X's latest announcement, encouraging Ofcom to investigate the company fully, despite the platform saying it has adhered to the government's request.
"I welcome this move from X, though I will expect the facts to be fully and robustly established by Ofcom's ongoing investigation," said Kendall.
"Our Online Safety Act is and always has been about keeping people safe on social media – especially children – and it has given us the tools to hold X to account in recent days.
"I also want to thank those who have spoken out against this abuse, above all the victims. I shall not rest until all social media platforms meet their legal duties and provide a service that is safe and age-appropriate to all users.
Rob Bonta, California's attorney general, also opened an investigation into X this week, urging it to take immediate action in response to reports that it was nudifying women and children.
"The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking," he said.
"This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet.
"I urge xAI to take immediate action to ensure this goes no further. We have zero tolerance for the AI-based creation and dissemination of non-consensual intimate images or of child sexual abuse material." ®
Source: The Register