YouTube has introduced new guidelines requiring content creators to disclose when their videos contain realistic AI-generated content that could deceive viewers. The policy update aims to curb the spread of synthetic content that can mislead users, particularly in sensitive areas such as elections, ongoing conflicts, and public health crises.
The platform will now require labels on videos that depict events that never occurred or show individuals saying or doing things they did not actually do. The move responds to experts' concerns that AI tools can generate convincing yet misleading content, especially as the 2024 elections approach.
In line with this transparency effort, YouTube will begin rolling out disclosure labels for AI-generated content early next year. Creators will be expected to display these labels prominently, especially on content discussing sensitive topics. Failure to comply with the disclosure requirements may result in consequences such as suspension from the Partner Program or removal of content.
Furthermore, YouTube will allow users to request the removal of AI-generated content that simulates identifiable individuals, including their likeness or voice. The change comes amid rising concerns about the misuse of AI to create non-consensual and potentially harmful content.
Additionally, audio partners on the platform will have the option to request the removal of AI-generated music imitating specific artists’ voices. These measures underscore YouTube’s commitment to maintaining transparency and accountability in the face of evolving AI technologies.