YouTube is expanding its efforts to combat AI-generated deepfakes by giving more public figures access to its likeness detection technology.
The feature, initially launched for creators in the YouTube Partner Program, will now be available to a pilot group of journalists and political figures.
YouTube expands its AI likeness detection program
YouTube’s likeness detection tool works similarly to its well-known Content ID system; however, instead of identifying copyrighted audio or video, it scans uploaded content for AI-generated material that uses a person’s face or likeness.
If the system detects a possible match, the individual can review the flagged video and request removal if it violates YouTube’s privacy policies. Not all takedown requests are granted automatically; the platform says content such as satire or parody may still be allowed, depending on the context.
The tool was first introduced last year for certain creators, allowing them to monitor and flag deepfakes that misuse their image. The new rollout expands that protection to journalists, government officials, and political candidates.
Addressing the rise of AI deepfakes
The expansion comes as AI-generated deepfakes become more common online. These videos use artificial intelligence to convincingly replicate a person’s face or voice, sometimes making them appear to say or do things that never actually happened.
The issue gained particular attention during recent election cycles, when manipulated videos of public figures circulated widely across social media platforms. YouTube says the goal of the new rollout is to help protect public conversation and prevent misinformation from spreading through AI impersonation.
YouTube’s AI crackdown over the years
Earlier this year, the company introduced new rules requiring creators to disclose when realistic AI-generated content is used in their videos. It has also worked with partners such as talent agency CAA to develop systems that help public figures monitor unauthorized uses of their likeness.
The platform has also expressed support for legislation such as the NO FAKES Act, which aims to give individuals stronger legal control over AI-generated replicas of their voice or appearance.
While the feature is currently limited to a select group, YouTube says it plans to continue developing tools that allow individuals to monitor how their identity is used in AI-generated content.