Can Meta effectively rein in AI-generated pornographic deepfakes of female public figures? Its oversight board is seeking answers.
Meta’s oversight board is currently investigating two instances of AI-generated images depicting nude female public figures.
The first case concerns a pornographic deepfake of an American public figure posted on Facebook. The oversight board will use the cases to assess Meta’s existing policies for handling explicit AI-generated content.
On Tuesday, Meta’s oversight board disclosed its inquiry into the tech company’s protocols concerning explicit AI deepfakes portraying women.
Specifically, the board is scrutinizing two distinct cases, one on Instagram and the other on Facebook.
One of the cases involves an AI-manipulated image of a nude woman who resembles an American public figure, shown with a man touching her inappropriately. The image was posted to a Facebook group dedicated to AI creations, with the figure’s name in the caption.
The oversight board did not disclose the identity of the female public figure depicted in the AI deepfake.
The board noted that a different user had shared the AI-generated nude image before it circulated in the Facebook group. Meta removed the post for violating its Bullying and Harassment policy, which prohibits “derogatory sexualized photoshop or drawings.”
The user who posted the image contested its removal, which an automated system initially upheld. Following the user’s appeal to the Board, the removal was maintained.
In January, pornographic deepfakes of Taylor Swift circulated widely on X, formerly known as Twitter, depicting the pop star in explicit scenes at football stadiums.
One post from a verified X user amassed over 45 million views before moderators removed it roughly 17 hours later.
The second investigation involves an AI-generated image of a nude woman resembling a prominent public figure from India. This content was published on an Instagram account dedicated to sharing AI-generated images of Indian women.
In this instance, Meta initially failed to remove the content despite two separate reports. Following an appeal to the Board, Meta acknowledged an error in retaining the post and subsequently took it down for violating the Bullying and Harassment Community Standard.
The oversight board deliberately selected these cases to evaluate Meta’s effectiveness in addressing explicit AI-generated visual content.
Various political figures, public personalities, and business leaders have expressed concerns regarding the prevalence of deepfakes and the associated risks.
White House press secretary Karine Jean-Pierre emphasized the disproportionate impact of lax enforcement of deepfakes on women and girls, who are often the primary targets.
Jean-Pierre advocated for social media platforms to proactively ban harmful AI-generated content, in addition to the potential role of legislation in combating this issue.
Microsoft CEO Satya Nadella condemned the circulation of nude deepfakes featuring Taylor Swift, labeling them as “alarming and terrible,” and advocated for enhanced protection against such content.
Beyond pornography, deepfakes can also influence elections, as demonstrated by AI-generated robocalls that impersonated President Joe Biden’s voice during the primaries.
Meta’s oversight board is actively seeking public input on the aforementioned cases, soliciting suggestions on strategies to address the issue and feedback regarding its severity.
The oversight board will deliberate over the coming weeks and then publish its decisions. While its recommendations are not binding, Meta must respond to them within 60 days.