Ofcom Probes X Over Grok AI as Deepfake Abuse Triggers Regulatory Alarm

The UK’s media and communications regulator, Ofcom, has launched a formal investigation into X, the social media platform owned by Elon Musk, over serious concerns that its artificial intelligence tool Grok is being used to generate sexualised deepfake images. The move marks a significant escalation in regulatory scrutiny of AI-driven content and signals that tolerance for platform inaction is rapidly diminishing.
Why Ofcom has stepped in now
Ofcom said its investigation was prompted by “deeply concerning reports” that Grok had been used to create and share images depicting people in states of undress without their consent. Even more alarming were claims that the tool had been used to generate sexualised images of children.
These allegations strike at the core of the UK’s online safety framework, which places heightened responsibility on platforms to prevent the spread of illegal and harmful content. Ofcom’s decision to intervene reflects growing frustration among regulators that voluntary safeguards have failed to keep pace with the risks posed by generative AI.
What the investigation could mean for X
If Ofcom concludes that X has breached UK online safety laws, the consequences could be severe. The regulator has the power to impose fines of up to 10 percent of a company’s global revenue or £18 million, whichever is greater. For a platform the size of X, such penalties would be both financially significant and reputationally damaging.
Beyond fines, enforcement action could also require changes to how Grok operates within the UK, including tighter controls, restrictions on image-generation features, or enhanced moderation obligations.
Grok and the mechanics of AI-enabled harm
Grok is an AI chatbot integrated into X that allows users to generate text and images based on prompts. Critics argue that its image generation capabilities have made it easier to produce convincing deepfakes, including explicit imagery involving real individuals.
The concern is not simply misuse by bad actors, but whether the system’s design made such misuse foreseeable. Regulators increasingly expect AI developers and platforms to anticipate how tools could be abused and to build safeguards accordingly.
X responds with a responsibility shift
In response to the investigation, X referred media inquiries to a statement posted by its Safety account earlier this year. The statement asserted that anyone using or prompting Grok to create illegal content would face the same consequences as users who upload illegal material directly.
This position places responsibility primarily on users rather than the platform itself. However, regulators and critics argue that such an approach is insufficient when tools can generate harmful content at scale and speed, especially when victims may have no awareness that images of them exist.
The wider legal and political context
The investigation comes as the UK prepares to bring into force new legislation that criminalises the creation of non-consensual intimate images, including those generated by AI. Together, the law and Ofcom’s probe represent a coordinated push to close regulatory gaps that have allowed deepfake abuse to flourish.
The timing also reflects international momentum. Countries in Southeast Asia have already blocked Grok, and debates over AI-generated sexual content are intensifying across Europe and North America.
Why women and children are central to the case
Deepfake abuse disproportionately targets women, particularly those in public life, and presents severe risks when children are involved. The creation of sexualised images, even if fabricated, can cause lasting psychological harm and reputational damage.
Ofcom’s language emphasising children signals that tolerance for failure in this area is effectively zero. Platforms are expected to treat such risks as existential, not incidental.
A turning point for AI platform accountability
This investigation represents more than a single regulatory action. It is part of a broader shift toward holding platforms accountable not only for the content they host, but also for the capabilities they deploy.
For X and other AI-driven platforms, the message is clear: innovation without robust safeguards is no longer defensible. As regulators move from warning to enforcement, the Grok case may become a defining precedent in how democratic societies govern generative AI.