UK Regulator Opens Investigation Into X Over Grok AI’s Sexualised Deepfake Content

Britain’s media regulator has launched a formal investigation into X, the social media platform owned by Elon Musk, following concerns that its Grok artificial intelligence chatbot generated sexually explicit and disturbing deepfake imagery. The probe focuses on whether the platform failed to meet its legal responsibility to protect users in the United Kingdom from potentially illegal content.
The investigation was announced on Monday by the UK communications watchdog Ofcom, which said it is assessing whether X breached its duties under the Online Safety Act. The case centres on reports that Grok, an AI chatbot developed by xAI and integrated into X, produced sexually explicit deepfake images that critics have described as deeply offensive and harmful.
At the heart of the inquiry is whether such material constitutes illegal content under UK law, particularly in relation to non-consensual deepfake imagery. British authorities have increasingly tightened regulations around synthetic media, especially when it involves sexualised depictions that could cause serious distress or reputational harm to individuals.
Grok was introduced as a conversational AI designed to answer questions and generate content in a more informal and edgy tone compared with rival chatbots. However, regulators and digital safety groups argue that this positioning may increase the risk of harmful outputs if safeguards are insufficient. The current probe will examine how Grok’s content moderation systems operate and whether X took adequate steps to prevent or swiftly remove problematic material.
Ofcom said the investigation is part of its broader mandate to enforce online safety standards across major digital platforms. Under the UK’s evolving regulatory framework, companies are expected to proactively assess risks posed by their services and implement effective measures to prevent illegal and harmful content from reaching users.
The case adds to growing scrutiny of X's approach to moderation under Elon Musk's ownership. Since his takeover, the platform has significantly reduced its trust and safety workforce and positioned itself as a champion of free expression. Critics argue that these changes have weakened oversight at a time when generative AI tools are becoming more powerful and harder to control.
Legal experts say the outcome of the investigation could have implications beyond a single platform. If Ofcom determines that X failed in its duty of care, it could set a precedent for how AI-generated content is regulated in the UK. This would be particularly relevant for companies deploying generative models capable of producing images, audio or video that closely resemble real people.
X has previously said it is committed to complying with local laws and improving the safety of its AI systems. However, the company has also warned against what it sees as excessive regulation that could stifle innovation. It remains unclear how X will respond formally to the probe or whether changes to Grok’s design or deployment will be required.
The investigation comes amid a wider international debate over how to regulate artificial intelligence. Governments across Europe and North America are grappling with how to balance innovation with protections against misuse, especially as deepfake technology becomes more accessible to the public.
For UK regulators, the case underscores a growing focus on platform accountability in the age of generative AI. As tools like Grok become more deeply embedded in social media ecosystems, authorities are signalling that companies will be held responsible not only for user-generated content but also for what their algorithms and AI systems produce.