According to The Associated Press, YouTube is introducing new guidelines for AI content. Video producers must now disclose whether they used generative artificial intelligence (AI) to create realistic-looking content. As outlined in a recent blog post, failure to disclose the use of AI tools may result in the removal of videos or suspension from the platform's revenue-sharing program.
In the blog post, Vice Presidents Jennifer Flannery O’Connor and Emily Moxley emphasized the potential of generative AI to stimulate creativity on YouTube. However, they highlighted the importance of balancing these opportunities with the responsibility to protect the YouTube community.
These guidelines expand on requirements set in September by Google, YouTube's parent company, which mandated clear warning labels on AI-generated political advertisements across YouTube and other Google platforms.
Under the new changes, set to take effect next year, YouTubers will have more options to specify whether they are sharing AI-generated videos that realistically depict events that never occurred or show individuals engaging in actions they did not actually perform.
O’Connor and Moxley emphasized the significance of transparency, especially concerning sensitive topics like elections, ongoing conflicts, public health issues, and public figures. In such cases, viewers will be informed of enhanced videos through labels displayed on the YouTube video player.
YouTube is also using AI to identify and filter out content that violates its policies, enabling the detection of "new forms of abuse." Content that uses AI to generate the likeness of an identifiable individual, including their face or voice, may be removed following an updated privacy complaint process.
Additionally, AI-generated music content that imitates an artist’s unique voice may be subject to removal based on requests from YouTube music partners such as record labels and distributors.