Moderating content on one of the world’s largest social platforms is no small task. Every day, users upload over 100 million videos to TikTok.
To keep its community safe, the platform increasingly relies on artificial intelligence alongside human moderation teams.
During the Sub-Saharan Africa Safer Internet Summit held in Nairobi, TikTok shared new insights into how AI technology helps detect harmful or misleading content faster.
The company uses automated moderation tools to identify potential violations of its community guidelines before users even report them.
According to TikTok’s latest enforcement report, more than 14 million videos were removed across Sub-Saharan Africa, of which 96.7 percent were detected proactively by automated technology.
The platform also requires creators to label realistic AI-generated content, helping viewers understand when a video has been created or modified using artificial intelligence.
TikTok has also partnered with the Coalition for Content Provenance and Authenticity to strengthen transparency around digital media.
New tools such as invisible watermarking and content credentials make it easier to identify AI-generated material across the internet.
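The core idea behind invisible watermarking can be sketched in miniature: hide a short identifying tag in the least-significant bits of an image's pixel data, where it is imperceptible to viewers but recoverable by software. The toy scheme below is purely illustrative and is not TikTok's or the C2PA's actual method; production systems use robust, cryptographically signed credentials that survive compression and editing.

```python
def embed_watermark(pixels: bytes, tag: bytes) -> bytes:
    """Hide `tag` in the least-significant bits of the carrier bytes."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return bytes(out)

def extract_watermark(pixels: bytes, tag_len: int) -> bytes:
    """Read back `tag_len` bytes from the least-significant bits."""
    bits = [pixels[i] & 1 for i in range(tag_len * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(tag_len)
    )

carrier = bytes(range(64))            # stand-in for raw image data
marked = embed_watermark(carrier, b"AIGC")
print(extract_watermark(marked, 4))   # b'AIGC'
```

Because only the lowest bit of each byte changes, the marked data is visually indistinguishable from the original, which is what makes such watermarks "invisible" yet machine-readable.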
These measures aim to ensure that technology designed to power creativity also supports a safe and transparent digital environment.