Government faces criticism over delays to deepfake law as concerns grow around AI misuse

Campaigners warn of growing harm from AI-generated abuse
The UK government is under increasing pressure from campaigners who say it is moving too slowly to introduce legislation banning the creation of non-consensual sexualised deepfakes. Critics argue that delays are leaving victims exposed at a time when artificial intelligence tools are becoming more accessible and powerful. The concern has intensified following renewed attention on AI systems capable of generating realistic images, audio, and video without a subject's consent, often with devastating personal consequences.
What the proposed law is meant to do
The proposed legislation would make it a criminal offence to create or share sexualised deepfake content without consent. Campaigners stress that the harm caused by such material is not hypothetical. Victims often face humiliation, psychological distress, and long-term reputational damage, even when the content is proven to be fabricated. Unlike traditional image abuse, deepfakes can be created without any original explicit material, meaning anyone with publicly available photos can be targeted.
Why Grok AI has become part of the debate
Recent controversy around AI tools such as Grok, developed by xAI, has sharpened scrutiny of government inaction. While Grok itself is marketed as a conversational system, campaigners say its existence highlights how rapidly AI capabilities are evolving across the tech sector. As tools become more sophisticated, the line between benign experimentation and harmful misuse grows thinner. Critics argue that legislation has failed to keep pace with these developments, creating a gap between technological reality and legal protection.
Victims caught in a legal grey area
One of the central frustrations for advocacy groups is that victims currently have limited legal recourse. Existing laws often focus on the distribution of explicit images, not their creation, and struggle to address synthetic media that uses no real sexual imagery. As a result, perpetrators can evade accountability while victims are left to pursue civil remedies that are costly, slow, and emotionally draining. Campaigners say this legal ambiguity effectively signals tolerance of abuse.
Government response and political caution
Ministers have previously acknowledged the risks posed by deepfakes and expressed support for stronger protections. However, critics say progress has stalled amid wider debates over online safety, free expression, and regulatory scope. The government has pointed to ongoing consultations and broader online safety frameworks as evidence of action, but campaigners argue that these processes are moving at a pace that does not reflect the urgency of the threat.
The wider implications for online safety
The deepfake debate intersects with broader concerns about digital harm, including misinformation, harassment, and identity abuse. As AI-generated content becomes harder to distinguish from reality, trust in digital spaces erodes. Advocacy groups warn that failure to act decisively on sexualised deepfakes could undermine public confidence in online safety reforms more generally. They argue that clear, targeted legislation would send a signal that certain forms of AI misuse are unequivocally unacceptable.
International comparisons add pressure
The UK is not alone in grappling with the issue, but campaigners note that some other jurisdictions are moving faster. Countries in Europe and parts of Asia have begun introducing specific offences related to deepfake abuse, creating expectations that the UK should follow suit. Falling behind, critics argue, risks turning the country into a permissive environment for perpetrators operating across borders.
Calls for urgency and accountability
Advocacy groups are now urging ministers to set a clear timeline for introducing the law and to prioritise protections for victims over prolonged consultation. They argue that every delay increases the likelihood of further harm, particularly for women and young people, who are disproportionately targeted. The debate has become a test of whether policymakers can respond effectively to fast-moving technological threats.
A defining moment for AI governance
The controversy highlights a broader challenge facing governments worldwide. Artificial intelligence is advancing faster than traditional lawmaking processes. How the UK responds to the issue of non-consensual deepfakes may shape public trust in its ability to regulate emerging technologies responsibly. For campaigners, the message is simple: the harm is already happening, and the law needs to catch up before the gap grows wider.