
NI Politician Leaves X as Grok Deepfake Fears Expose Cracks in Online Safety

The growing backlash against AI-driven deepfakes has taken on a human and political dimension after a Northern Ireland politician announced she is leaving X, citing serious concerns about the platform’s failure to protect users from harm. The decision highlights how debates around AI governance are no longer abstract policy discussions but deeply personal issues affecting public figures and private citizens alike.

A personal line crossed by digital abuse

Cara Hunter, a member of the Social Democratic and Labour Party, said she would quit X after what she described as “complete negligence in protecting women and children online.” Her decision is rooted in personal experience. Four years ago, Hunter was targeted by a deepfake video, an incident that left a lasting impact and shaped her view of how digital platforms handle abuse.

Her renewed concern centres on Grok, the artificial intelligence tool integrated into X, which has been accused of generating sexualised and non-consensual imagery. For Hunter, the re-emergence of deepfake threats through AI has reinforced a sense that the platform has failed to learn from earlier harms.

Grok and the widening deepfake debate

Grok, developed by Elon Musk’s xAI, allows users to generate and manipulate images using AI. Critics argue that the tool has made it easier to create convincing deepfakes, including intimate images of women produced without consent. Evidence suggesting that the tool has generated sexualised images of children has intensified public outrage.

While X positions itself as a champion of free expression, critics say the balance has tipped too far toward permissiveness at the expense of safety. Hunter’s departure reflects a growing view that platforms enabling such tools bear responsibility not just for speech, but for foreseeable misuse.

The UK steps in with legal force

Hunter’s decision comes as the UK government moves to bring into force new legislation making it illegal to create non-consensual intimate images, regardless of whether they are real or AI-generated. The law represents a significant shift in how digital harm is treated, focusing on the act of creation and its impact rather than the technical method used.

The legislation is widely seen as a direct response to the rise of AI-generated abuse. By explicitly covering deepfakes, the UK aims to close legal gaps that allowed perpetrators to evade accountability by claiming images were artificial rather than real.

Ofcom’s investigation raises the stakes

Alongside the new law, Ofcom has launched a formal investigation into X. The regulator is examining whether the platform complied with its duties to protect users from illegal and harmful content under the UK’s Online Safety Act.

This investigation could have far-reaching consequences. If Ofcom finds failures in safeguards or response mechanisms, X could face fines, enforcement actions or demands to change how tools like Grok operate within the UK.

Why women and children are central to the issue

Deepfake abuse disproportionately targets women, particularly those in public life. For children, the risks are even more severe, with the potential for long-term psychological and reputational harm. Hunter’s statement emphasised that AI-generated images are not harmless digital artefacts but tools that can be weaponised to intimidate, silence and degrade.

Her exit from X sends a signal that participation in online platforms is increasingly contingent on trust. When that trust erodes, users may choose withdrawal over exposure.

A broader warning for social platforms

The controversy surrounding Grok and Hunter’s departure illustrates a turning point for social media companies. AI innovation is outpacing governance frameworks, but tolerance for inaction is shrinking. Regulators, lawmakers and users are converging around the expectation that platforms must anticipate harm, not merely react to it.

For X, the challenge is no longer reputational alone. It is legal, political and ethical.

More than a politician leaving a platform

Cara Hunter’s decision is symbolic of a broader shift. As laws tighten and investigations deepen, platforms that fail to address AI-enabled abuse risk losing not just users, but legitimacy. The debate over Grok is ultimately about who bears responsibility in an AI-driven world.

For many, the answer is becoming clearer. Innovation without accountability is no longer acceptable.