A growing number of YouTube creators are reporting sudden channel terminations and strikes issued without prior warning or clear explanation. The reports span content categories, with creators describing "unexplained strikes or terminations" that arrived without any prior policy notice or outreach from the platform.
Reports of unexplained enforcement
Several creators say their channels were penalized or removed over alleged violations that were never specifically detailed. Others claim their channels were terminated after being linked to separate accounts that had received strikes, despite, they say, having no connection to those accounts.
YouTuber ThatStarWarsGirl posted on X that one of her long-used clips, featuring “Star Wars” producer Kathleen Kennedy’s face, was suddenly flagged for “sexual content.” Days later, on November 7, she received a strike on the same video.
“Last week you flagged me for ‘sexual content’ of a clip of Kathleen Kennedy’s face I’ve used for YEARS, and now you STRIKE ME!? Again, this is absolute BS! Please fix this now!” she wrote in her post. She said she appealed the decision, but the appeal was rejected almost instantly. YouTube has since responded, though it reportedly moved the discussion with her into private messages.
Another creator, Shaun Young, shared that his channel was unexpectedly terminated for “severe or repeated violations” of YouTube’s spam, deceptive practices, and scams policy. He said he had not received any prior strikes or communication before the removal notice appeared.
This follows a recent case involving Enderman, a tech YouTuber with more than 350,000 subscribers, whose channel was terminated after YouTube’s automated moderation system reportedly linked it to another account that had been flagged. The creator stated that there was no human review process or direct contact with support before the termination.
Faulty AI moderation system
YouTube’s policies state that severe violations may result in termination without warning, but creators argue that automation and account-linking tools are misidentifying legitimate channels. While AI tools improve moderation efficiency, they say, the systems also increase the risk of false positives and wrongful account removals that can disrupt livelihoods overnight.
