TikTok, the short-form video platform owned by Chinese parent company ByteDance, has confirmed plans to restructure its UK trust and safety operations, placing hundreds of moderation jobs at risk. The restructuring is part of a global reorganization designed to consolidate moderation teams into fewer hubs worldwide. Similar changes are also being rolled out in South and Southeast Asia, with Malaysia singled out as another country where roles will be heavily affected.
The decision reflects TikTok’s broader effort to rely more heavily on artificial intelligence for content moderation, reducing its dependence on human reviewers. The company framed the move as part of an ongoing efficiency drive that began last year and is now being extended to Europe.
Rising Reliance on AI Moderation
Content moderators are responsible for screening harmful or illegal material, including hate speech, misinformation, and explicit content. With more than 1.5 billion global users and over 30 million monthly users in the UK alone, TikTok requires enormous resources to keep its platform safe.
The company revealed that automated technologies already account for the vast majority of enforcement: more than 85% of content removed for violating TikTok’s Community Guidelines is now taken down by AI systems, often before any user has reported it. The firm also highlighted that AI tools are increasingly used to limit moderators’ exposure to disturbing material, reducing the psychological risks of reviewing harmful content.
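TikTok has not published the internals of these systems, but the basic pattern of automated enforcement is straightforward to sketch. Below is a minimal, hypothetical triage loop in Python; the names and thresholds are illustrative assumptions, not TikTok's actual pipeline. A classifier score routes each video to automatic removal, human review, or approval, and anything escalated to humans can be pre-blurred to limit exposure.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of an AI moderation triage step.
# All names and thresholds are illustrative assumptions.

class Action(Enum):
    AUTO_REMOVE = "auto_remove"    # removed before any user report
    HUMAN_REVIEW = "human_review"  # ambiguous; queued for a moderator
    ALLOW = "allow"

@dataclass
class Video:
    video_id: str
    violation_score: float  # 0.0-1.0 from an upstream classifier

AUTO_REMOVE_THRESHOLD = 0.95   # high confidence: remove without review
REVIEW_THRESHOLD = 0.60        # uncertain: escalate to a human

def triage(video: Video) -> Action:
    """Route a video based on classifier confidence."""
    if video.violation_score >= AUTO_REMOVE_THRESHOLD:
        return Action.AUTO_REMOVE
    if video.violation_score >= REVIEW_THRESHOLD:
        # A reviewer would typically see a blurred or low-resolution
        # preview first, reducing direct exposure to disturbing material.
        return Action.HUMAN_REVIEW
    return Action.ALLOW

if __name__ == "__main__":
    for v in [Video("a", 0.98), Video("b", 0.72), Video("c", 0.10)]:
        print(v.video_id, triage(v).value)
```

In a design like this, the share of content removed automatically depends entirely on where the thresholds sit, which is why confidence calibration, not raw model accuracy, tends to drive how much work reaches human reviewers.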
While AI is clearly efficient, critics warn that algorithms cannot always parse cultural nuance, political context, or satire, and that dangerous material can still slip past automated filters undetected.
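A toy example makes the context problem concrete. The snippet below, a deliberately naive keyword filter (the blocklist and sentences are invented for illustration), removes a news report condemning violence just as readily as an actual threat, because nothing in the match considers who is speaking or why.

```python
# Toy illustration: a bare keyword filter cannot distinguish a word
# used to incite harm from the same word quoted in news coverage
# or counter-speech. Blocklist and examples are invented.
BLOCKLIST = {"attack"}

def naive_filter(text: str) -> bool:
    """True if the text would be removed by a bare keyword match."""
    tokens = text.lower().split()
    return any(word in tokens for word in BLOCKLIST)

print(naive_filter("we should attack them"))           # True  (correct removal)
print(naive_filter("the mayor condemned the attack"))  # True  (false positive)
```

Production classifiers are far more sophisticated than this, but the underlying failure mode, judging text without grasping intent or context, is exactly what critics say persists at the margins.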
Layoffs and Relocations Across Europe
The reorganization will hit TikTok’s trust and safety teams in the UK hardest. Analysts estimate that around 300 employees in moderation roles could be affected. TikTok has suggested that many of these responsibilities will be transferred to offices in other European cities, such as Dublin or Lisbon, or outsourced to third-party service providers.
Despite the restructuring, TikTok insists it will maintain a trust and safety presence in the UK, though the scale of this presence will be far smaller. The company argues that concentrating moderation teams in fewer locations will streamline operations and allow it to make greater use of technological advances.
Union and Worker Backlash
The announcement has triggered criticism from labor representatives and digital safety advocates. The Communication Workers Union, which represents some TikTok employees, expressed strong concern over the cuts. Union leaders argue that removing large numbers of experienced human moderators in favor of AI could undermine platform safety and expose users to harmful material. They warn that AI, while powerful, remains imperfect and cannot replace the judgment and cultural awareness of human reviewers.
Campaigners have also raised questions about whether shifting moderation responsibilities to third-party providers or other jurisdictions will lower accountability and increase risks for UK users.
Regulatory Pressures in the UK
The restructuring comes at a sensitive moment, as the UK government rolls out its Online Safety Act. This legislation requires digital platforms to quickly detect and remove harmful or illegal content, with financial penalties of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, for failures to comply.
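The scale of that penalty ceiling is easy to underestimate. A quick sketch of the statutory cap (the revenue figure is purely illustrative):

```python
def max_osa_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    """Upper bound on an Online Safety Act fine:
    £18m or 10% of qualifying worldwide revenue, whichever is greater."""
    return max(18_000_000.0, 0.10 * qualifying_worldwide_revenue_gbp)

# Illustrative only: a platform with £10bn in qualifying revenue
# faces a ceiling of £1bn, far above the £18m floor.
print(f"£{max_osa_fine(10_000_000_000):,.0f}")  # £1,000,000,000
```

For a company of TikTok's size, the revenue-based cap, not the £18 million floor, is the binding figure, which helps explain the urgency behind faster automated enforcement.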
Analysts note that TikTok’s pivot to AI is partly a response to these new legal demands. Automated moderation systems can detect and remove problematic content faster than human teams, reducing the risk of fines. However, the move raises questions about whether AI systems can meet the higher standards of accountability and accuracy required by UK regulators.
Investments and Contradictions in the UK Market
The restructuring contrasts with TikTok’s recent promises to expand its UK operations. Just two months earlier, in June, the company announced plans to create 500 new jobs in Britain, describing the country as home to its largest community in Europe. With over 30 million people, around half the population, using TikTok every month, the UK is a crucial growth market for the platform.
The new investment was widely interpreted as a signal of TikTok’s long-term commitment to the UK, despite ongoing political and security concerns. The layoffs in trust and safety therefore appear contradictory, raising doubts about how TikTok balances workforce investment with its reliance on technology.
TikTok’s Data Security Concerns and Political Scrutiny
TikTok has long been in the spotlight of Western governments, including the UK and the US, over concerns that its Chinese ownership could pose security risks. Critics fear that personal data collected from millions of users could potentially be accessed for espionage or political influence. These concerns have led to bans of the app on government devices in several countries and ongoing investigations into its operations.
Against this backdrop, the reduction of human moderators in the UK could intensify scrutiny. Regulators and policymakers may question whether AI-driven moderation can ensure user safety while also protecting against misinformation, disinformation, and harmful political content.
Financial Strength Despite Job Cuts
Despite the layoffs, TikTok remains financially robust. In Europe alone, the platform’s 2024 revenue grew by nearly 40% to $6.3 billion, while operating losses narrowed substantially. The company’s financial momentum underscores that the restructuring is not about cost-cutting alone but reflects a strategic decision to double down on automation and technology in content moderation.
What Comes Next
TikTok’s UK trust and safety teams now face uncertainty as restructuring plans advance. Employees whose roles are being phased out are expected to receive relocation opportunities in other European offices or severance packages. Meanwhile, TikTok will continue deploying AI systems as the backbone of its moderation strategy.
For users, the changes raise pressing questions about the balance between automation and human oversight. While AI is fast and scalable, human moderators remain critical in understanding nuance, context, and cultural sensitivity. The long-term test will be whether TikTok can maintain user safety and comply with tough new regulations while leaning more heavily on artificial intelligence.
Key Facts at a Glance
| Category | Details |
|---|---|
| Scope of Restructuring | UK trust & safety teams, with several hundred roles at risk |
| Other Regions Affected | South and Southeast Asia, especially Malaysia |
| AI Moderation Share | Over 85% of removed content is already taken down by AI |
| Estimated UK Impact | Around 300 jobs affected, with some work shifted to Dublin, Lisbon, or external firms |
| Union Concerns | Risk to user safety from over-reliance on AI; loss of accountability |
| Regulatory Backdrop | UK Online Safety Act imposes strict moderation obligations and heavy fines |
| UK Market Size | Over 30 million monthly UK users; half the population |
| Financial Results | European revenue up 38% in 2024 to $6.3B; operating losses significantly reduced |
| Political Scrutiny | Ongoing concerns about Chinese ownership and data security risks |
| Future Commitment | Recent June 2025 pledge to add 500 UK jobs despite trust & safety cuts |
TikTok’s restructuring of its UK trust and safety operations illustrates the platform’s growing dependence on artificial intelligence for content moderation. With hundreds of jobs at risk, unions and safety advocates are raising alarms about whether AI can adequately replace human oversight.
The timing, coinciding with the enforcement of the UK’s Online Safety Act, underscores the company’s push to demonstrate compliance through faster, technology-driven responses. Yet this strategy also deepens the debate about the human cost of automation and whether platforms can maintain high safety standards while cutting human moderation roles.
As TikTok continues to expand its user base and revenues, the long-term question remains: can automation alone ensure safety, accountability, and trust on one of the world’s most influential social media platforms?