Meta is set to revise its policies on manipulated and AI-generated content, introducing labeling measures ahead of the fall elections. The decision follows criticism from the independent board that oversees the company’s content moderation, which deemed the existing policies “incoherent and confusing” and recommended a reassessment.
The impetus for these changes can be traced back to the Meta Oversight Board’s evaluation earlier this year, prompted by a heavily edited video featuring President Biden that circulated on Facebook. The video was manipulated to depict Mr. Biden engaging in inappropriate behavior towards his adult granddaughter, creating a misleading impression.
While the Oversight Board determined that the video did not breach Meta’s guidelines, since it was not altered with artificial intelligence and did not show the president saying words he had not said, the Board used the case to highlight flaws in Meta’s policy framework. It criticized the policy as incoherent, insufficiently justified, and overly focused on how content is created rather than on the specific harms it aims to prevent, such as electoral interference.
In response to these findings, Meta’s vice president of content policy, Monika Bickert, announced in a blog post that the company will begin labeling AI-generated content in May. Under the revised approach, manipulated media will carry “informational labels and context” rather than being removed solely for violating community standards. The labels will apply to a broader range of content than manipulated video alone, covering digitally created or altered images, video, and audio that pose a significant risk of misleading the public on important issues.
Acknowledging the Oversight Board’s critique that Meta’s policy was too narrow in scope, Bickert noted that AI technology has evolved to produce realistic alterations of audio and photos, not just video. She agreed on the need to address manipulative content that portrays individuals doing things they did not actually do.
The Oversight Board welcomed these policy adjustments as substantial improvements in Meta’s handling of manipulated content, signaling a more comprehensive and proactive approach to safeguarding the integrity of information shared on the platform.
The decision comes amid growing concern over AI and editing tools that make deceptive video and audio easy to create. Incidents such as a fabricated robocall impersonating President Biden during the New Hampshire presidential primary underscore the urgency of addressing these challenges to protect democratic processes. The spread of AI-generated content depicting prominent political figures, including former President Trump and President Biden, further highlights the need for vigilant measures to combat misinformation and verify the authenticity of digital content.