Meta has updated its AI labeling policy, broadening the scope of “manipulated media” to encompass deceptive audio and images in addition to AI-generated videos on Facebook, Instagram, and Threads.
Central to the update is Meta’s stated preference for transparency over restriction: rather than removing problematic content outright, Meta will label it. Two new labels, “Made with AI” and “Imagined with AI,” identify the origin of the content.
New Warning Labels
Meta will identify AI-generated content by recognizing industry-shared AI signals and through self-disclosure by creators:
“Our ‘Made with AI’ labels for AI-generated video, audio, and images will rely on industry-shared AI signals or self-disclosure by content creators.”
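For illustration only, the sketch below shows what checking for one such industry signal could look like in principle: the IPTC “Digital Source Type” value trainedAlgorithmicMedia, which metadata-writing tools can embed in AI-generated media. This is a simplified, assumption-laden example, not Meta’s implementation; a production pipeline would properly parse standards such as C2PA and XMP rather than searching raw bytes.

```python
# Illustrative sketch only -- not Meta's actual detection pipeline.
# Scans a media file's raw bytes for the IPTC "Digital Source Type"
# vocabulary term that generative tools can embed in their output.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC term for AI-generated content


def has_ai_signal(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC AI marker.

    A byte-level search is a crude stand-in for real XMP/C2PA parsing,
    but it keeps this sketch dependency-free.
    """
    with open(path, "rb") as handle:
        return AI_MARKER in handle.read()


if __name__ == "__main__":
    import sys

    for media_path in sys.argv[1:]:
        label = "Made with AI" if has_ai_signal(media_path) else "no AI signal found"
        print(f"{media_path}: {label}")
```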
Content that poses a particularly high risk of misleading people may receive a more prominent label to help users judge what they are seeing.
Harmful content that breaches the Community Standards, such as inciting violence, election interference, bullying, or harassment, will still be removed, whether it was created by humans or AI.
Rationale Behind Meta’s Policy Update
The initial AI labeling policy, established in 2020, primarily targeted deceptive videos, particularly those featuring public figures making fabricated statements. Recognizing how far the technology has advanced, Meta’s Oversight Board recommended updating the policy to cover AI-generated audio and images alongside video.
User-Driven Feedback
The revision drew on extensive input from a broad range of stakeholders and the general public, and the policy retains the flexibility to adapt as the technology evolves.
In Meta’s own words:
“In Spring 2023, we initiated a review of our policies to align with rapid technological progress… We engaged with over 120 stakeholders across 34 countries globally. The feedback overwhelmingly supported labeling AI-generated content, especially in high-risk scenarios. Many stakeholders endorsed the idea of content creators self-identifying AI-generated content.”
Public opinion research involving 23,000 respondents from 13 countries revealed strong support (82%) for warning labels on AI-generated content featuring misrepresented statements.
The Oversight Board’s recommendations were influenced by consultations with civil society groups, academics, inter-governmental bodies, and other experts.
Collaborative Approach and Consensus
Meta plans to keep pace with technological advancements by collaborating with organizations such as the Partnership on AI, governmental bodies, and non-governmental organizations. The revised policy emphasizes transparency and context in handling AI-generated content: labeling is the default, with removal reserved for content that violates the Community Standards.