
### Tools to Combat the Pervasive Issue of Deepfake Porn

Tools designed to stop AI-generated deepfake porn targeting children, celebrities, and others include digital watermarks, "poison pill" image protections, and new legislation.

Polly Thompson

Unstable Diffusion has emerged at the forefront of AI image generation.

AI-generated deepfakes are proliferating as the tools for making them grow more capable and accessible. Pornographic deepfakes in particular are being widely disseminated, targeting victims across demographics, from children to celebrities. Efforts are underway to curb the surge.

The challenges posed by artificial intelligence can be daunting: for every advantage it offers, there is a matching potential for misuse.

A significant issue stemming from AI is the creation of deepfakes—videos, images, or audio produced by artificial intelligence to replicate individuals saying or doing things they never actually did.

Some deepfakes overlay a person’s likeness onto authentic video footage, while others are entirely computer-generated.

"The State of Deepfakes," a 2019 study by Deeptrace, found that a staggering 96% of deepfake videos were pornographic.

Henry Ajder, a researcher specializing in deepfakes who contributed to the study, highlighted that while the percentage may have shifted, the problem persists.

With the increasing accessibility of AI tools like DALL-E, Stable Diffusion, and Midjourney, individuals with minimal technical expertise can easily create deepfakes.

Although the proportion of pornographic deepfake content has decreased, the overall volume has surged, leading to millions of individuals worldwide becoming victims of this malicious practice.

Despite the fabricated nature of the content, the emotional distress, embarrassment, and intimidation experienced by the victims are undeniably real.

Tragically, a British teenager took her own life in 2021 after being targeted with deepfake pornographic images that were circulated by fellow students via Snapchat, as reported by BBC News.

Deepfake pornography featuring pop icon Taylor Swift recently circulated online, drawing widespread attention to the issue and prompting Elon Musk's X to restrict searches for the singer.

While the situation appears grim, there are tools and strategies available to safeguard against identity manipulation through AI.

#### Detection of Deepfakes

One proposed solution is the use of digital watermarks, which clearly indicate content generated by AI. This approach has received support from the Biden administration.

Such watermarks raise public awareness and help platforms identify and remove harmful fake content.
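
As a rough sketch of the underlying idea, the snippet below hides a short provenance tag in the least significant bits of an image's pixels, assuming Pillow is installed. Production watermarks are far more robust (they survive compression, resizing, and cropping); this toy version, with its hypothetical `embed_tag` and `read_tag` helpers, only illustrates how an invisible marker can ride along inside the pixel data.

```python
# Toy invisible watermark: store a short tag in the lowest bit of each
# pixel's red channel. Illustrative only -- real watermarking schemes are
# designed to survive compression, cropping, and other edits.
from PIL import Image  # pip install Pillow


def embed_tag(in_path: str, out_path: str, tag: str = "ai-generated") -> None:
    img = Image.open(in_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    width = img.size[0]  # assumes the image holds at least len(bits) pixels
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite lowest red bit
    img.save(out_path, format="PNG")  # lossless format preserves the bits


def read_tag(path: str, n_bytes: int = 12) -> str:
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width = img.size[0]
    bits = [str(pixels[i % width, i // width][0] & 1) for i in range(n_bytes * 8)]
    data = bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, n_bytes * 8, 8))
    return data.decode("utf-8", errors="replace")
```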

Tech giants like Google and Meta have announced their intentions to label material created or altered by AI with a “digital credential” to enhance transparency regarding content origins.

OpenAI, known for developing ChatGPT and the image generator DALL-E, plans to incorporate both visual watermarks and concealed metadata that disclose the image’s history, aligning with the Coalition for Content Provenance and Authenticity (C2PA) standards.
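
As a loose illustration of the metadata half of that approach, the sketch below writes a small provenance record into a PNG text chunk and reads it back, again assuming Pillow. Real C2PA manifests are cryptographically signed and far richer; the `attach_provenance` helper and the manifest fields here are hypothetical stand-ins.

```python
# Simplified stand-in for C2PA-style provenance metadata: a JSON record in a
# PNG text chunk. Real C2PA manifests are cryptographically signed; this only
# demonstrates the embed/read round trip.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def attach_provenance(in_path: str, out_path: str) -> None:
    img = Image.open(in_path)
    manifest = {
        "generator": "example-image-model",  # hypothetical tool name
        "created_with_ai": True,
        "actions": ["generated"],
    }
    meta = PngInfo()
    meta.add_text("provenance", json.dumps(manifest))
    img.save(out_path, format="PNG", pnginfo=meta)


def read_provenance(path: str):
    """Return the embedded manifest, or None if the image has none."""
    text_chunks = getattr(Image.open(path), "text", {})  # PNG text chunks
    raw = text_chunks.get("provenance")
    return json.loads(raw) if raw else None
```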

Additionally, specialized platforms have been designed to verify the authenticity of online content. Sensity, the organization behind the 2019 deepfake study, has introduced a detection service that notifies users via email when they encounter content exhibiting distinct AI-generated characteristics.
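
In the same spirit, a naive first screening pass might simply look for metadata traces that generators and provenance tools leave behind, as in the sketch below. The key names are assumptions rather than a definitive list, and a clean result proves nothing: serious detectors like Sensity's analyze the pixels themselves.

```python
# Naive screening pass: flag metadata traces that some AI generators leave
# behind. Key names are assumptions; absence of flags proves nothing.
from PIL import Image

SUSPECT_TEXT_KEYS = {"parameters", "provenance", "prompt"}  # assumed markers
EXIF_SOFTWARE_TAG = 305  # standard EXIF tag id for "Software"


def metadata_flags(path: str) -> list[str]:
    img = Image.open(path)
    text_chunks = getattr(img, "text", {})  # PNG text chunks, if any
    flags = [k for k in text_chunks if k.lower() in SUSPECT_TEXT_KEYS]
    software = img.getexif().get(EXIF_SOFTWARE_TAG)
    if software:
        flags.append(f"software={software}")
    return flags
```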

Even when the artificial nature of an image is apparent, individuals depicted may still feel victimized.

#### ‘Poison Pills’

Protective tools that shield images from manipulation are considered a more robust solution, albeit still in the nascent stages of development.

These tools let users overlay their images with an imperceptible signal; when an AI system later processes a protected image, the result comes out blurred and unusable.

For instance, Nightshade, a tool devised by researchers at the University of Chicago, subtly alters an image's pixels so that AI models misread its content, while the picture looks unchanged to human viewers.

Ben Zhao, one of the researchers, explained to NPR, “You can think of Nightshade as adding a small poison pill inside an artwork in such a way that it’s literally trying to confuse the training model on what is actually in the image.”
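
To make the mechanics concrete, the toy sketch below adds a bounded, nearly invisible perturbation to an image before sharing, assuming NumPy and Pillow. It is emphatically not Nightshade: the real tool optimizes its perturbation against model feature extractors so that training actually goes wrong, whereas plain random noise like this only illustrates the imperceptibility constraint.

```python
# Toy "poison pill" sketch: bounded noise a human viewer will not notice.
# Nightshade's real perturbations are optimized against model features;
# random noise like this will NOT actually confuse a model.
import numpy as np
from PIL import Image


def cloak(in_path: str, out_path: str, epsilon: int = 3, seed: int = 0) -> None:
    rng = np.random.default_rng(seed)
    arr = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    noise = rng.integers(-epsilon, epsilon + 1, size=arr.shape, dtype=np.int16)
    cloaked = np.clip(arr + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(out_path, format="PNG")  # lossless output
```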

While primarily aimed at safeguarding artists’ intellectual property, this technology is applicable to any photograph.

Ajder remarked, “This serves as a crucial frontline defense to reassure individuals when sharing photos, such as those from a friend’s birthday celebration.”

#### Impact of Regulation

Several states have implemented varying legal protections for victims of deepfakes, as reported by The Associated Press.

Recent high-profile incidents have intensified the pressure on legislators to prevent and penalize the malicious utilization of AI and deepfakes.

After hoax calls during the New Hampshire primary used an AI-generated imitation of Joe Biden's voice, the Federal Communications Commission banned AI-generated robocalls.

In January, a bipartisan group of senators introduced the DEFIANCE Act, a federal bill that would let victims sue those who disseminate sexual deepfakes of them, tackling the problem through civil rather than criminal law.

A bill introduced by Rep. Joe Morelle in May to criminalize the dissemination of deepfakes has not progressed, and legislative efforts face opposition from free-speech advocates, some of whom argue that a privately made deepfake amounts to a harmless fantasy: what harm is done, they ask, if the content is never shared?

In the UK, the Online Safety Act has made it illegal to distribute deepfake porn, though not to create it.

Ajder counters that, unlike a private fantasy, creating such content opens avenues for dissemination. Criminalizing production, even if enforcement is difficult, acts as a vital deterrent: some people generate this material for personal consumption out of curiosity, and making clear that doing so is criminal helps dissuade them.

Governments can exert pressure on search engine providers, AI developers, and social media platforms to curb the proliferation of AI-generated content.

In India, a scandal involving deepfake porn featuring Bollywood actresses prompted expedited legislation and calls on major tech companies to prevent the dissemination of AI-generated content online.

Ajder acknowledges the persistent challenge posed by deepfakes, emphasizing the importance of introducing obstacles to deter individuals from creating such content intentionally.
