
### Surging Demand: AI-Powered Deepfake Nude Services Gain Momentum

Analytics firm Graphika says the amount of non-consensual intimate imagery online has grown 2,408% …

According to a recent report on non-consensual intimate imagery (NCII), harmful deepfakes have extended well beyond the domain of celebrities and public figures.

A report issued Friday by social media analytics firm Graphika highlights the growing trend of “AI undressing,” in which advanced AI tools are used to digitally remove clothing from images uploaded by users.

As Kotaku reported, the gaming and streaming community grappled with the issue earlier this year, after popular Twitch broadcaster Brandon ‘Atrioc’ Ewing inadvertently revealed that he had viewed AI-generated pornography of female streamers he described as friends.

In March, Ewing returned to the spotlight, expressing remorse and detailing his efforts to mitigate the harm he had caused. Even so, the incident had exposed how vulnerable the wider online community remains.

Graphika’s findings suggest the incident was far from an isolated one.

Santiago Lakatos, an intelligence analyst at Graphika, stated, “Using data from Meltwater, we tracked the volume of comments and posts on Reddit and X containing links to 34 websites and 52 programs offering NCII services. These totaled 1,280 in 2022, compared with over 32,100 so far this year, a 2,408% increase year-over-year.”
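
As a quick check on those figures, the reported percentage is consistent with the raw counts: growth from 1,280 mentions in 2022 to roughly 32,100 this year works out to

$$
\frac{32{,}100 - 1{,}280}{1{,}280} \times 100\% \approx 2{,}408\%.
$$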

The surge in NCII, as outlined by New York-based Graphika, illustrates how these tools have evolved from niche internet forums into a burgeoning industry.

According to Graphika, “these models empower a wider array of providers to swiftly and cost-effectively generate realistic NCII at scale.” Without such services, clients would need to develop, maintain, and operate their own bespoke image manipulation models, a labor-intensive and sometimes expensive endeavor.

The proliferation of AI undressing tools, as highlighted by Graphika, raises concerns about targeted harassment, sextortion, and the creation of child sexual abuse material in addition to fake pornographic content.

Providers of AI undressing services promote their offerings on social media, guiding customers to their websites, personal Telegram chats, or Discord servers where the tools are accessible, according to Graphika’s report.

Some services overtly advertise “undressing,” showcasing images of individuals as proof of capability, while others are more discreet, embedding key terms related to NCII in their profiles and posts while presenting themselves as AI art services or Web3 photo galleries.

While AI deepfakes have most visibly targeted celebrities in video form, with the likenesses of YouTube personality Mr. Beast and Hollywood actor Tom Hanks both misused in fraudulent advertisements, the undressing services identified by Graphika are focused on manipulating still images.

To combat the persistent threat posed by AI deepfakes, some public figures, including Scarlett Johansson and Indian actor Anil Kapoor, have turned to legal recourse. While mainstream celebrities garner more media attention, adult performers say their voices are often overlooked.

Tanya Tate, a prominent adult performer and head of Star Factory PR, previously remarked to Decrypt, “It’s undeniably challenging. I’m sure it’s much easier if someone is in the mainstream.”

Tate highlighted the prevalence of fake social media accounts misusing her likeness and content even before the latest advances in AI. The ongoing stigmatization of sex workers, which pushes them and their supporters to remain in the shadows, makes the problem harder to confront.

In a separate report released in October, the UK-based Internet Watch Foundation (IWF) disclosed the discovery of 20,254 images of AI-generated child abuse material on a single dark web forum in the span of one month. The IWF cautioned that such material could overwhelm the internet.

The IWF warned that AI-generated pornography has advanced to a stage where distinguishing it from authentic images is increasingly difficult, raising the risk that law enforcement will expend resources pursuing images of children who do not exist rather than real victims of abuse.

Dan Sexton, chief technology officer of the Internet Watch Foundation, told Decrypt, “The ongoing issue is that you can’t ascertain the authenticity of content.” The markers that distinguish real images from fabricated ones are not 100% reliable.

For his part, Ewing has been working with investigators, technologists, researchers, and the women affected since his transgression in January, Kotaku reported. Ewing said he has provided funding to Morrison Cooper, a Los Angeles law firm run by Ryan Morrison, to offer legal help to any female Twitch user who needs assistance issuing takedown notices to websites sharing images of them.

Ewing also cited research by researcher Genevieve Oh as helping him understand the severity of the problem. “In the battle against this form of content, I sought out the ‘bright spots,’” Ewing remarked.
