In a detailed update published November 13, 2025, YouTube addressed a wave of creator feedback over its content-moderation system and appeal workflow. The company said it reviewed hundreds of social-media posts and associated channels alongside its Trust & Safety team, affirming that the majority of enforcement decisions were correct and found no widespread “bug” in the system.

One high-profile case involved the creator Enderman, whose channel of more than 350k subscribers was terminated after an automated system allegedly linked it to another flagged account. The creator said no human review occurred before the ban. Separately, reported spikes in unexplained strikes and removals have added pressure on YouTube to clarify how automated moderation and account linking operate.

Automation and human review

YouTube reiterated its combined approach of automated detection and human oversight: machines flag content rapidly and at scale, while human reviewers handle complex or edge-case situations. The platform cited violations such as mass uploading of auto-generated or low-quality content, content scraped from other creators, and videos designed to mislead viewers off-platform.

On appeals, YouTube clarified that creators have a one-year window and one appeal per termination, submitted via YouTube Studio. Appeals filed beyond that window, or after the first attempt, receive standard automated responses. The company acknowledged that communication around past decisions has lacked clarity and said it is working to improve transparency around policy language and decision timing.

“The vast majority of termination decisions were upheld, with only a handful of nuanced cases being overturned,” YouTube wrote on its forum.

“One area we’re working on is providing more specific policy descriptions and timestamps, as we know this is a top request from creators,” the platform added.