YouTube has responded to questions about its use of automated moderation after disclosing that roughly 12 million channels were terminated in 2025. The company said the majority of removals were the result of enforcement against spam, scams, impersonation and other forms of deceptive or repetitive content, much of which is identified through automated systems.
The use of AI for moderation
According to YouTube, AI-based tools are used to detect policy violations at scale, often flagging channels before content gains wider distribution. The platform stated that automation plays a central role in managing enforcement across billions of uploads, while human reviewers are involved in more complex cases and in reviewing appeals.
“AI will make our ability to detect and enforce on violative content better, more precise, able to cope with scale,” YouTube CEO Neal Mohan said.
The platform came under fire after creators raised concerns about false positives and the limited explanations given for enforcement actions. YouTube said that while automation is responsible for many initial decisions, its systems are designed to prioritize clear policy violations and reduce harm across the platform.
Enforcement processes and appeals
YouTube reiterated that creators whose channels are terminated can submit an appeal through YouTube Studio. The company noted that each channel is eligible for one appeal and encouraged creators to include detailed information to support their case. YouTube did not provide data on how many terminations are overturned on review.
The platform acknowledged that enforcement decisions can be frustrating, particularly when creators believe removals were made in error. It said work is ongoing to improve how policy explanations are communicated and how moderation tools interpret context.
YouTube said automated enforcement remains necessary given the volume and speed of content uploaded to the platform. The company added that it continues to invest in improving detection accuracy, particularly as generative AI tools drive increased volumes of low-quality or misleading content.
While YouTube maintains that automation is a core part of its moderation strategy, it said systems are regularly updated based on feedback and review outcomes.
