
### Cautionary Tale: Taylor Swift and the Risks of Deepfakes

For years, researchers predicted a huge wave of AI-powered harassment. Now it’s all happening…


Platformer’s recent big move drew favorable coverage this past weekend, and as part of it, new customers can get 20 percent off the first year of a monthly subscription. Now: is it too soon to blame the internet’s problems on artificial intelligence?

The rise of AI has brought a flood of AI-generated spam that reportedly outranks human-written stories in Google search results, deepening the media industry’s losses as advertising revenue declines.

Novel forms of manipulation and deception have emerged from generative AI tools, with instances already observed this quarter in political contexts in Harlem and New Hampshire, where deepfaked audio of politicians was used to mislead voters. The Financial Times has also reported an uptick in the use of these systems for banking fraud and scams.

The use of generative AI tools in harassment campaigns is a pressing issue that warrants attention.

The dissemination of sexually explicit AI-generated images of Taylor Swift on X last Wednesday garnered significant attention. Despite the overuse of the term “going viral,” these images did, in fact, attract a substantial audience.

Jess Weatherbed from The Verge reported:

The post from the verified user who shared the images garnered over 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the account was suspended for violating platform policy. The post remained live on the platform for approximately 17 hours before its removal.

Subsequently, as users engaged with the popular post, the images circulated widely, being reposted across multiple accounts. Some of these accounts are still active, and numerous new counterfeit images have since emerged. The term “Taylor Swift AI” began trending in certain regions, further amplifying the visibility of the images.

The story reflects a fundamental aspect of X’s current state, and it is unfortunately not entirely surprising. Following Elon Musk’s takeover of X, the dismantling of its trust and safety teams, and the selective enforcement of its policies, the platform has faced an advertiser backlash and regulatory scrutiny around the world. (X did not respond to requests for comment.)

Given these circumstances, the proliferation of explicit AI-generated content on the platform is not unexpected. X’s policies on explicit content are lenient, and Apple has so far been reluctant to act on the platform’s apparent violations of App Store rules; X remains one of the most prominent apps in the world, and notably still carries an official rating of just 17+ for “Infrequent/Mild Sexual Content and Nudity.”

Robust policies, dedicated enforcement teams, and swift action are imperative to differentiate between consensual adult content and AI-generated harassment. The lack of such measures on X has allowed instances like the Taylor Swift incident to amass millions of views unchecked.

However, attributing Swift’s ordeal solely to X’s failures would be misguided. Another critical perspective examines how platforms rejecting requests to moderate content responsibly have inadvertently provided a platform for malicious actors to organize, produce, and disseminate harmful content. Researchers have observed pipelines connecting messaging platforms like Telegram and X, where malicious campaigns are orchestrated, executed, and then propagated.

According to Samantha Cole and Emanuel Maiberg at 404 Media, the dissemination of Swift deepfakes was facilitated through the Telegram-to-X network:

Following the circulation of sexually explicit AI-generated images of Taylor Swift on Twitter, it was revealed that these images originated from a specific Telegram group dedicated to sharing offensive content about women, as reported by 404 Media. The group reportedly leverages a Microsoft text-to-image AI tool among other resources. […]

The same images that surfaced online and went viral on Twitter were first shared on Telegram. Members of the group even joked about potential repercussions following the images’ viral spread on Twitter.

Considering Telegram’s lax stance on prohibiting the exchange of illicit content, it appears unlikely that the platform will face significant consequences. Consequently, Telegram, boasting over 700 million active users, demands as much scrutiny as any other major social media platform.

Technology plays a pivotal role in the Swift incident as well. The images were reportedly generated primarily with Designer, Microsoft’s generative AI image tool, which is currently in beta.

While Microsoft swiftly blocked the relevant keywords after the images spread, it is highly likely that open-source tools will soon produce even more realistic images than the ones that flooded X this time.

If this were a story of successful moderation—platforms swiftly combating harmful content out of obligation or ethical responsibility—it would be commendable.

However, unrestricted access to generative AI tools, coupled with social platforms’ negligent policies enabling the spread of harmful content, poses a growing risk. It would be a mistake to assume that only high-profile figures like Swift will bear the brunt of such abuse.

On platforms like 4chan, troll groups are generating non-consensual intimate images of women attending court proceedings via livestreams. Recent findings also revealed non-consensual intimate deepfakes surfacing prominently in Google and Bing search results. Deepfake creators are actively soliciting requests on Discord and marketing their creations on dedicated websites. Currently, federal legislation addressing deepfakes is lacking, with only ten states having implemented regulations thus far. (Links provided by NBC’s Kat Tenbarge, who extensively covers this domain.)

Given the longstanding warnings from researchers, the prevalence of such abuses is particularly distressing.

Renée DiResta, technical research manager at the Stanford Internet Observatory, wrote in a Threads post: “This is something that was predicted years in advance.” Technology observers have for years published discussions and articles warning that evolving disinformation tactics, harassment, and non-consensual intimate imagery would become prominent forms of abuse, which underscores the urgency of the issue.

While legislative action on this front has been sluggish over the past decade, there is now a window of opportunity for Congress to act. X CEO Linda Yaccarino is scheduled to testify before Congress this week on child safety concerns, alongside executives from Meta, Snap, Discord, and TikTok.

In 2019, House Speaker Nancy Pelosi’s speech was deliberately slowed down in a misleading video on Facebook, prompting Congress to reprimand the platform. Fast forward to the present, the landscape of manipulated media has evolved significantly, surpassing previous challenges. How many more wake-up calls must policymakers receive before decisive action is taken?

While generative AI holds promise for innovative applications, the current landscape underscores the high cost associated with its misuse. The potential for widespread harm necessitates swift action from those in positions of authority.



This week on the podcast, Kevin and Casey attempt to talk Chris Dixon of Andreessen Horowitz out of cryptocurrencies. They also delve into the impact of AI on the news industry and play a round of HatGPT.

Listen on: Apple, Spotify, Stitcher, Amazon, Google, and YouTube


Whoops

Due to an editing error, Tuesday’s edition for paid subscribers ran the Governing links in place of the Industry links. If you missed them and wish to catch up, the corrected links are provided below; we have also fixed this on the website. Apologies for the inconvenience, and a big thank you to all the readers who brought this to our attention.


Governing

  • Apple announced significant revisions to its App Store guidelines to comply with the European Union’s Digital Markets Act.
  • Prosecutors revealed Apple’s plans to file a complaint in the US against the NSO Group for the Pegasus spyware incidents targeting phone users. (Source: Zac Hall/ 9to5Mac)
  • The FTC opened inquiries into Microsoft’s, Amazon’s, and Google’s investments in OpenAI and Anthropic. (Source: David McCabe, The New York Times)
  • OpenAI, despite initial promises of transparency, is reportedly reneging on its commitment to publicize governing documents. (Source: Paresh Dave/ WIRED)
  • Sam Altman has discussed with members of Congress the possibility of building new chip factories in the US. (Source: Jeff Stein and Gerrit De Vynck, Washington Post)
  • The US and China are poised to collaborate on AI safety initiatives, according to Arati Prabhakar, director of the White House Office of Science and Technology Policy. (Source: Madhumita Murgia, Financial Times)
  • Election conspiracy theories are circulating in the New Hampshire Voter Integrity Facebook Group ahead of the state’s first-in-the-nation primary. (Source: David Gilbert, WIRED)
  • Researchers have found easily accessible content glorifying mass violence on social platforms including TikTok, Discord, Roblox, Telegram, and X. (Source: Moustafa Ayad and Isabelle Frances-Wright, Institute for Strategic Dialogue)
  • Meta’s Oversight Board overturned the company’s decision to leave up an Instagram post containing false claims about the Holocaust, finding that it violated hate speech rules. (Source: Oversight Board)
  • Meta is implementing direct message limits on Facebook and Instagram for users under 16 to curb unsolicited messages to teenagers. (Source: Ivan Mehta, TechCrunch)
  • Ring, owned by Amazon, announced restrictions on police access to consumer surveillance camera footage, requiring warrants for retrieval. (Source: Matt Day, Bloomberg)
  • European artists are contemplating legal action against Midjourney and other AI companies over a list of 16,000 artists allegedly used to train AI models. (Source: James Tapper, The Guardian)
  • Wikipedia faced pressure from the Russian government, resulting in the platform being temporarily locked in Russia. (Source: Noam Cohen, Bloomberg)

Industry

  • Children are spending 60% more time on TikTok than on YouTube, recent research finds, even as YouTube remains dominant among the demographic. (Source: Sarah Perez, TechCrunch)

  • Apple is reportedly intensifying efforts to integrate generative AI into smartphones. (Source: Michael Acton, Financial Times)
  • Google Ads conversations are now powered by Google Gemini, facilitating streamlined Search strategy creation for marketers. (Source: Aisha Malik, TechCrunch)
  • The Circle to Search feature is arriving on Google Pixel 8 and 8 Pro devices, letting users search anything they highlight on screen; the Pixel 8 Pro can now also measure body temperature using its built-in sensor. (Source: Chris Welch, The Verge)
  • Google’s AI video generator Lumiere excels at producing various subjects, particularly animals in whimsical scenarios. (Source: Benj Edwards, Ars Technica)
  • Hugging Face is partnering with Google Cloud to host its AI software for startups, giving open-source developers easier access. (Source: Julia Love, Bloomberg)
  • Microsoft briefly achieved a market cap of $3 trillion, becoming the second company after Apple to reach this milestone. (Source: Ryan Vlastelica, Bloomberg)
  • Microsoft laid off 1,900 employees at Activision Blizzard and Xbox, with Blizzard’s chairman, Mike Ybarra, departing the company. (Source: Tom Warren, The Verge)
  • OpenAI is cutting API prices, releasing new models, and shipping an updated GPT-4 Turbo preview model intended to combat “laziness.” (Source: Devin Coldewey, TechCrunch)
  • BeReal is introducing a feature allowing brands and public figures to register as “RealBrands” and “RealPeople.” (Source: Amanda Silberling, TechCrunch)
  • Twitch is revising its payment structure for creators, transitioning to flat-rate payments for Prime Gaming subscriptions, potentially impacting creators’ earnings. (Source: Ash Parrish, The Verge)
  • A significant portion of top right-wing media outlets do not block the web crawlers AI companies use for data scraping, with over 88 percent allowing access. (Source: Kate Knibbs, WIRED)
  • Ads in numerous applications are reportedly tracking users’ physical locations, interests, and family details, raising concerns about privacy violations.
  • Streaming platforms hosting pirated content are reportedly witnessing a surge in profits, with close to 90% profit margins and annual revenues of approximately $2 billion. (Source: Thomas Buckley, Bloomberg)

Insightful Comments

For daily updates on new content, follow Casey’s Instagram stories.
(Listen to the podcast here: [Link])


Last modified: January 26, 2024