
Unveiling X’s Prominence in AI Porn: Bobbi Althoff Deepfake Revelation

A lack of moderation and a flood of clout-chasing accounts have turned the platform into “4chan 2.”

When deepfake videos featuring comedian Bobbi Althoff began circulating on online forums dedicated to AI-generated adult content, they initially drew a modest audience, accumulating 178,000 views over the past half-year.

The situation took a drastic turn, however, when one of these videos was shared on X. The deepfake depicted the 26-year-old entertainer undressed and engaging in explicit acts, and it went viral: within just nine hours, the video amassed over 4.5 million views, far exceeding its viewership on traditional adult websites, according to industry data.

Formerly known as Twitter, X was among the first social media platforms to establish stringent guidelines against AI-generated content, acknowledging the dangers posed by deceptive “synthetic media” back in 2020 and affirming its commitment to addressing the issue appropriately.

Under the ownership of Elon Musk, however, X has evolved into a prominent conduit for the dissemination of nonconsensual deepfake pornography. Not only does the platform facilitate the rapid spread of fabricated images and videos in a minimally regulated environment, but it also inadvertently rewards individuals who propagate such malicious content for financial gain.

Describing X as “4chan 2,” analyst Genevieve Oh draws parallels to the infamous unmoderated message board notorious for hosting not only deepfake adult content but also hateful propaganda and glorification of violence. Oh warns that platforms like X are empowering malicious actors to collaborate in defaming prominent women through manipulated visuals and videos.

While federal legislation concerning deepfakes remains absent, certain states like Georgia and Virginia have enacted laws prohibiting the dissemination of AI-generated nonconsensual pornography.

X’s policies prohibit “nonconsensual nudity,” yet enforcement has been lackluster due to significant layoffs and restructuring within the company, particularly in the “trust and safety” division responsible for content moderation.

Musk’s dismissive attitude towards content regulation is evident in his public statements deriding moderation efforts as unnecessary restrictions imposed by authoritarian figures. This laissez-faire approach to content oversight has contributed to the proliferation of deepfake material on the platform.

X’s inadequacy in curbing the spread of deepfakes was underscored when manipulated sexual images of pop icon Taylor Swift went viral, prompting the company to restrict searches related to her name. Despite these challenges, X continues to grapple with effectively addressing the issue, as evidenced by delayed removal of objectionable content and the persistence of posts directing users to illicit material.

Bobbi Althoff, initially recognized for her comedic TikTok content centered around parenting and pregnancy, has garnered a substantial following on social media platforms. Her recent ordeal with deepfake exploitation serves as a stark reminder of the vulnerabilities faced by public figures in the digital age.

As the saga unfolded, Althoff expressed bewilderment and dismay at finding her name associated with explicit content on X, clarifying that the circulated material was entirely fabricated. Despite her efforts to set the record straight, numerous posts linking to the deepfake video remained accessible for an extended period, exacerbating the situation.

The prevalence of deepfakes, generated through AI algorithms that superimpose individuals’ faces onto unrelated bodies, has been a longstanding issue, one that disproportionately targets women and girls. These malicious creations have been used for harassment, embarrassment, and objectification, affecting celebrities, politicians, and private individuals alike.

Platforms like X and messaging services such as Telegram have emerged as hubs for the creation and distribution of deepfake content, with some individuals even monetizing the production of explicit material featuring manipulated imagery.

Efforts to monetize deepfake content have prompted creators to seek broader audiences on platforms like X, aiming to capitalize on the virality of such material. While some deepfake posts have surfaced on other platforms like Instagram and Reddit, they have faced swift removal and garnered significantly less attention compared to X.

In response to the moderation challenges, Musk has proposed community-driven moderation through “Community Notes,” allowing users to flag inappropriate content collaboratively. However, the effectiveness of this approach remains questionable, as evidenced by the delayed implementation of such notes on posts containing deepfake material.

Despite sporadic community notes acknowledging the artificial nature of certain content, the dissemination of deepfakes persists on X, perpetuating the spread of misinformation and exploitation. The lack of proactive measures to combat deepfake proliferation underscores the urgent need for robust content moderation and regulatory frameworks in the digital realm.

Will Oremus contributed to this report.

Last modified: February 22, 2024