
Meta Expands Deepfake Detection Policies and Enhances Context for ‘High-Risk’ Manipulated Content


Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from its Oversight Board. Starting next month, the company will label a wider range of such content, including applying a “Made with AI” badge to deepfakes. When content has been manipulated in other ways that pose a high risk of deceiving the public on important matters, Meta may also add additional contextual information.

The change could lead the social networking giant to label more pieces of potentially misleading content, an important step in a year of many elections around the world. For deepfakes, however, Meta will only apply labels where the content carries “industry standard AI image indicators” or where the uploader has disclosed that it is AI-generated.

AI-generated content that falls outside those bounds will likely escape unlabeled. The policy change is also likely to mean more AI-generated content and manipulated media remaining on Meta’s platforms, as the company shifts toward an approach focused on “providing transparency and additional context” as the preferred way to handle such content, rather than removing it, given the risks that removals pose to free speech.

For AI-generated or otherwise manipulated media on Meta platforms such as Facebook and Instagram, the playbook therefore appears to be: more labels, fewer takedowns. Meta said it will stop removing content solely on the basis of its current manipulated video policy in July, adding that this timeline gives people time to understand the self-disclosure process before the company stops removing the smaller subset of manipulated media.

This strategic shift may be a response to escalating legal pressures on Meta regarding content moderation and systemic risks, including the European Union’s Digital Services Act. The EU law, in effect since last August, imposes regulations on Meta’s primary social networks, compelling the company to navigate a delicate balance between eliminating illicit content, mitigating systemic risks, and safeguarding freedom of speech. The EU is intensifying scrutiny on platforms ahead of the upcoming European Parliament elections in June, urging tech giants to watermark deepfakes wherever feasible.

The looming US presidential election in November likely factors into Meta’s considerations as well.

Oversight Board critique

Meta’s Oversight Board, which is funded by the tech giant but operates independently, reviews a tiny fraction of its content moderation decisions and can also issue policy recommendations. Meta is not bound to accept the Board’s suggestions, but in this instance it has agreed to amend its approach.

In a blog post published Friday, Monika Bickert, Meta’s VP of content policy, said the company is amending its policies on AI-generated content and manipulated media based on the Oversight Board’s feedback. The Board argued that the existing approach was too narrow, since it covered only videos that were created or altered by AI to make a person appear to say something they did not say, and Meta agreed that a broader scope was needed.

The Oversight Board had previously urged Meta to rethink its approach to AI-generated content after a case involving a doctored video of President Biden that implied a sexual motive behind a platonic familial gesture. While the Board backed Meta’s decision to leave the specific content up, it criticized the company’s manipulated media policy as incoherent, noting that it applied only to AI-created videos while exempting other forms of fake content, such as conventionally doctored videos or manipulated audio.

Meta appears to have heeded the critical feedback by broadening its approach. Recognizing the evolution of AI technology to encompass realistic audio and photos beyond video manipulation, Meta is expanding its labeling of synthetic media based on industry-shared AI signals or self-disclosure by content uploaders.

The revised policy will encompass a wider range of content beyond manipulated media, as recommended by the Oversight Board. Meta aims to provide more prominent labels for digitally-created or altered content that poses a significant risk of deceiving the public on crucial matters, enabling users to make informed assessments and gain context when encountering similar content elsewhere.

Meta clarified that it will refrain from removing manipulated content, whether AI-generated or otherwise altered, unless it violates other policies such as voter interference, harassment, violence, or incitement. Instead, the company may introduce informational labels and context in scenarios of heightened public interest.

To address risks associated with manipulated content, Meta highlighted its collaboration with nearly 100 independent fact-checkers tasked with identifying and mitigating such risks. These external entities will continue to evaluate false or misleading AI-generated content, prompting Meta to adjust algorithms to reduce the reach of flagged content and append informational overlays for users who come across it.

As synthetic content proliferates due to advancements in generative AI tools, third-party fact-checkers are expected to face a growing workload. Consequently, more of this content is likely to persist on Meta’s platforms following this policy adjustment.

Last modified: April 6, 2024