YouTube is introducing an AI-powered age verification system to better identify underage users and comply with the Children’s Online Privacy Protection Act (COPPA). The system uses machine learning to analyze user behavior, such as search history, video categories and account age, to determine if an account is operated by a minor.
A move to protect younger users
If the system identifies a user as underage, it will apply restrictions. These include disabling ads, enabling digital wellbeing tools and limiting repetitive content recommendations. YouTube stated that it aims to provide a safer experience for families while maintaining privacy for younger users.
This system builds on YouTube’s earlier efforts to protect younger users, such as supervised accounts introduced in 2021. The company stated that the AI approach has been tested in other markets and will initially roll out to a subset of U.S. users before broader implementation.
A compliance push to promote safety
The system follows YouTube’s $170 million fine in 2019 for COPPA violations, which led to changes such as disabling ads and comments on videos deemed child-directed. However, those changes also impacted creators who weren’t necessarily targeting young audiences.
The new system also aligns with YouTube’s response to Australia’s upcoming law requiring platforms to prevent children under 16 from creating accounts. The law, which takes effect in December, could result in fines for non-compliance, and YouTube’s AI system is expected to play a key role in meeting these requirements.
YouTube’s approach will be closely monitored. The platform’s ability to implement these measures effectively could set a precedent for other digital platforms facing similar regulatory pressures.
