TikTok replaces 150 German staff with AI and outsourced workers

BERLIN, GERMANY — TikTok is facing backlash in Germany as 150 trust and safety employees protest mass layoffs, with the company shifting content moderation to AI and outsourced contract workers, The Guardian reports. The move raises concerns over harmful content oversight and worker rights, even as TikTok insists the change will improve efficiency.
Concerns over accuracy and worker rights
The German trade union Ver.di claims that the automated screening system flags benign material, such as Pride flags, as dangerous while failing to catch genuinely harmful content.
“AI is not able to really identify problematic pictures or videos, especially when it comes to sophisticated content,” said Kalle Kunkel, a Ver.di spokesperson.
Employees review as many as a thousand videos per day and argue that AI lacks the nuance a human moderator brings to the task.
TikTok counters that AI allows it to remove policy-violating content more quickly, often before users see it. The company also asserts that automation reduces human moderators’ exposure to disturbing videos, which could benefit their mental health.
Critics nonetheless warn against over-reliance on AI. Aliya Bhatia, senior policy analyst at the non-profit Center for Democracy and Technology, fears the shift will lead to more errors, putting users, particularly minors, at risk.
“Replacing people tasked with ensuring that platforms are safe and rights-respecting for all users, including minors, is going to lead to more mistakes and more harmful experiences,” she said.
Global trend toward AI moderation sparks labor disputes
TikTok’s layoffs in Germany follow a broader pattern of replacing human moderators with AI. Over the past year, the company has cut hundreds of trust and safety staff in Malaysia, and Reuters has reported further job cuts across Asia, Europe, the Middle East, and South Africa.
While companies argue AI improves efficiency, unions say it undermines labor rights. In Germany, Ver.di has demanded better severance and extended notice periods, but TikTok has refused negotiations.
The standoff has led to strikes, with workers rallying against what they call an erosion of job security and content safety standards.
Legal and ethical challenges in automated moderation
The European Union’s stringent Digital Services Act (DSA) requires platforms such as TikTok to police harmful content or face hefty fines. Critics argue that AI moderation may fall short of the law’s requirements, exposing the company to regulatory penalties.
Meanwhile, outsourced contractors—who will handle some of TikTok’s remaining moderation—may lack the mental health support available to in-house staff, Kunkel claims. TikTok says it invests $2 billion in trust and safety to provide strong protections.
Responding to stern warnings workers received during the rally, Ver.di’s lead negotiator, Kathleen Eggerling, said: “It seems TikTok may need content moderators to fact-check its internal communications as well. We call on management to stop intimidating strikers.”
As the conflict drags on, the standoff highlights the tension between corporate cost-cutting and the need for consistent, ethical content moderation.

Independent




