Technology
TikTok Executive Says Teen Safety Will Not Be Compromised by AI Moderation
With online safety under intense scrutiny, especially in the United Kingdom, TikTok is introducing major updates to how content is monitored and evaluated. At a time when regulators, parents, and young users are paying close attention to digital risks, Sky News secured a rare interview with Ali Law, TikTok’s director of public policy and government affairs for northern Europe, to discuss how the platform plans to protect its community while shifting toward heavier use of artificial intelligence.
Law acknowledged that TikTok is undergoing significant change. The company is adding safety features, updating its internal processes, and rethinking how content moderation works at scale. With millions of daily uploads, the platform relies on a combination of technology and human reviewers to determine what stays online. Now, the balance between those two forces is changing.
AI Takes on a Larger Role in Content Moderation
Artificial intelligence has long been part of TikTok’s moderation system. For years, it has helped filter out content that clearly breaches safety guidelines. According to TikTok, AI currently removes about eighty-five percent of harmful or violative posts automatically. However, the company is now preparing to expand this role significantly, reducing its reliance on human moderators. This shift raises an important question: can AI systems accurately detect dangerous content without increasing risks to teenagers and other vulnerable groups?
Law believes the answer is yes. He explained that the models TikTok now uses are far more advanced than earlier versions. While previous systems could identify a basic violation, such as the presence of a weapon, today’s models can interpret context, which is essential for making accurate moderation decisions at scale.
How AI Moderation Has Become More Sophisticated
To illustrate the progress, Law offered a simple but powerful example. Older systems could recognise a knife in a video, but they could not understand how it was being used. A cooking demonstration and a violent encounter looked nearly identical to earlier AI models because they focused on object recognition alone. The latest systems, however, can distinguish between the two scenarios by analysing patterns of movement, environment, sound and intent.
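The distinction Law draws, object recognition alone versus contextual interpretation, can be sketched as a toy decision rule. Nothing below reflects TikTok's actual system; the function names, labels, signals and outcomes are illustrative assumptions only:

```python
# Toy sketch (not TikTok's system): why object recognition alone
# misclassifies, and how adding context signals changes the outcome.
# All labels and rules here are illustrative assumptions.

def object_only_decision(objects):
    """Older-style check: flag any video containing a weapon-like object."""
    return "remove" if "knife" in objects else "allow"

def context_aware_decision(objects, scene, action):
    """Newer-style check: the same object is judged by surrounding context."""
    if "knife" in objects:
        # A knife in a kitchen, used for chopping, reads as a cooking video.
        if scene == "kitchen" and action == "chopping":
            return "allow"
        # A knife paired with threatening movement is escalated.
        if action == "threatening":
            return "remove"
        # Ambiguous combinations are routed to a human reviewer.
        return "human_review"
    return "allow"

# The object-only check treats a cooking demo and a violent clip identically;
# the context-aware check separates them.
print(object_only_decision(["knife", "cutting_board"]))            # remove
print(context_aware_decision(["knife", "cutting_board"],
                             "kitchen", "chopping"))               # allow
print(context_aware_decision(["knife"], "street", "threatening"))  # remove
```

Real systems replace these hand-written rules with learned models over video, audio and motion features, but the structure of the decision, object plus context rather than object alone, is the same.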
This deeper level of understanding allows TikTok to moderate content more precisely and with fewer mistakes. Law emphasised that TikTok sets a high internal standard before deploying any new technology. The company runs extensive checks to ensure new systems are at least as accurate as existing ones, and ideally outperform them. Any rollout happens gradually, and human reviewers continue to oversee the transition so that unexpected errors can be corrected in real time.
Human Oversight Remains Part of the Process
Even as TikTok expands its AI capabilities, the company says it is not completely removing the human element. Human moderators will still be involved in reviewing complex or borderline cases, community-specific content and cultural nuances that are difficult for machine learning systems to interpret. Law stressed that the company will not make sudden or reckless changes. AI will take on more tasks, but human judgement remains part of the safety equation.
However, TikTok has confirmed that the number of human moderators employed globally will decline over time. Tens of thousands of reviewers currently work across different regions, and the shift to more automated processes means many of these positions will be phased out. This development has sparked debate about the role of human oversight in digital safety and the potential risks of relying too heavily on automated systems. For now, TikTok argues that improved AI tools allow the company to be more consistent, efficient and accurate than before.
Why TikTok Believes Users Remain Safe
Law said that TikTok’s priority is ensuring that any technological change does not compromise user safety. The company evaluates moderation performance carefully, and new systems are deployed only once they meet internal safety benchmarks. He noted that human oversight is built into each stage of the transition and that AI tools are introduced gradually so their performance can be monitored closely.
Despite concerns from some advocacy groups, TikTok insists that AI is becoming reliable enough to perform many of the tasks previously handled by humans. The platform’s goal is to create a safer environment by removing harmful content more quickly and reducing the emotional burden placed on human reviewers, who often have to watch disturbing material.
A Digital Platform Under Constant Evolution
TikTok’s move toward heavier AI moderation is part of a broader effort to streamline its operations, keep pace with massive user growth and adapt to the increasingly complex nature of online content. Whether this shift will satisfy regulators remains to be seen. Public debate about online safety continues to grow, and platforms like TikTok face continuous pressure to prove they can protect young users.
What is clear from Law’s interview is that TikTok sees AI as an essential part of its future. By combining improved technology with selective human oversight, the company believes it can create a safer and more responsive online experience for its global community.
