YouTube is rolling out a new artificial-intelligence tool designed to detect when a creator’s likeness is used without permission in AI-generated or manipulated videos. The feature, initially available to creators in the YouTube Partner Program, adds a “Likeness Detection” tab in YouTube Studio where creators can review flagged videos and request removal of unauthorized content that uses their face, voice or identity.
How the tool works
After creators verify their identity, the system scans uploaded content at scale for matches to the creator’s face or other identifying features. YouTube warns that the prototype may flag genuine creator footage alongside manipulated content, so creators must review flagged items carefully and submit removal requests only when appropriate. YouTube began testing the feature in partnership with talent agencies and selected creators in late 2024 and is now broadening access to more users.
Added layer of security
For creators whose personal brand and identity are central to their business, the tool offers an added layer of protection as deepfakes and AI-generated content become more sophisticated. It aims to address the ongoing problem of maliciously altered videos that could damage a creator’s reputation, earnings or relationships with sponsors. The tool has limitations, however: false positives could waste creators’ time, and its effectiveness depends on creators actively reviewing flags and submitting disputes.
As AI-generated content becomes more widespread across social and video platforms, YouTube’s move signals that likeness protection is no longer optional but part of the operating baseline for creators and platforms alike.
