UK government demands X act over Grok AI deepfake abuse

Ministers intervene over misuse of AI on social media
The UK government has called on X to take urgent action after its artificial intelligence chatbot Grok was found to be generating sexualised images of women and girls without their consent. The intervention follows evidence showing users prompting the tool to digitally alter images so that individuals appear undressed or placed in explicit scenarios. Officials say the content is deeply harmful and highlights serious gaps in how emerging AI tools are being controlled online.
Technology Secretary Liz Kendall described the situation as "absolutely appalling" and warned that the spread of such images would not be tolerated. She stressed that women and girls must be protected from degrading and abusive uses of technology, particularly when tools can produce realistic imagery at scale.
How Grok became a focal point of concern
Grok is an AI chatbot integrated into X and promoted as a conversational tool capable of answering questions and generating content. While AI image generation has been available across many platforms, Grok’s use in producing sexualised depictions has raised fresh alarm because of how easily users can request such content. According to material reviewed by journalists, prompts asking the system to alter appearances or place people in sexual contexts were sometimes met with generated outputs rather than firm refusals.
Campaigners argue that this demonstrates how quickly AI can be misused when safeguards are insufficient. Unlike traditional image editing, AI lowers the technical barrier, allowing harmful content to be created rapidly by anyone with access to the tool. This has intensified calls for platforms to introduce stricter limits and clearer accountability.
The response from X and Elon Musk’s platform
In response to criticism, X said it takes action against illegal content on the platform, including child sexual abuse material (CSAM), by removing posts, permanently suspending accounts, and cooperating with law enforcement where necessary. The company maintains that it has policies in place to address abuse and that enforcement is ongoing.
However, critics argue that reactive moderation is not enough when AI systems themselves enable harmful behaviour. The platform is owned by Elon Musk, whose approach to content moderation has often emphasised free expression. The current controversy suggests a growing tension between innovation, speech, and the need to protect users from abuse.
Wider implications for AI regulation
The Grok controversy comes as governments around the world grapple with how to regulate artificial intelligence effectively. AI tools are advancing faster than legal frameworks, creating gaps that can be exploited before rules catch up. In the UK, ministers have signalled that companies deploying AI must take responsibility for foreseeable misuse, especially when it involves harm to vulnerable groups.
Experts say this case could become a test of how existing online safety laws apply to generative AI. If platforms fail to demonstrate that they can prevent abuse proactively, they may face tougher regulatory intervention or legal consequences.
Protecting users in a rapidly changing digital space
For many women and girls, the existence of AI-generated sexual images created without consent is deeply distressing, regardless of whether the content is shared widely. The ease with which such images can be produced raises serious questions about privacy, dignity, and digital safety in the AI era.
As pressure mounts on X, the outcome will be closely watched by regulators and technology companies alike. The debate goes beyond one chatbot and speaks to a broader challenge: how to ensure powerful new technologies are developed and deployed responsibly without causing real-world harm.