Nonconsensual AI-generated images and videos appearing to show singer Taylor Swift engaged in sex acts flooded X, the site formerly known as Twitter, last week, with one post reportedly viewed 45 million times before it was taken down. The deluge of AI-generated “deepfake” porn persisted for days and only slowed after X briefly banned search results for the singer’s name on the platform entirely. Now, lawmakers, advocates, and Swift fans are using the content moderation failure to fuel calls for new laws that clearly criminalize the online spread of sexually explicit, AI-generated deepfakes.
How did the Taylor Swift deepfakes spread?
Many of the AI-generated Swift deepfakes reportedly originated on the notoriously misogynistic message board 4chan and a handful of relatively obscure private Telegram channels. Last week, some of those images made the jump to X, where they quickly spread like wildfire. Numerous accounts flooded X with the deepfake material, so much so that searching for the term “Taylor Swift AI” would surface the images and videos. In some regions, The Verge notes, that same term was featured as a trending topic, which amplified the deepfakes further. One post in particular reportedly received 45 million views and 24,000 reposts before it was eventually removed. It took X 17 hours to remove the post despite it violating the company’s terms of service.
X did not immediately respond to PopSci’s request for comment.
Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We’re closely…
— Safety (@Safety) January 26, 2024
With new iterations of the deepfakes proliferating, X moderators stepped in on Sunday and blocked search results for “Taylor Swift” and “Taylor Swift AI” on the platform. For several days, users who searched for the pop star’s name reportedly saw an error message reading “something went wrong.” X officially addressed the issue in a tweet last week, saying it was actively monitoring the situation and taking “appropriate action” against accounts spreading the material.
Swift’s legion of fans took matters into their own hands last week by posting non-sexualized images of the pop star with the hashtag #ProtectTaylorSwift in an effort to drown out the deepfakes. Others banded together to report accounts that uploaded the pornographic material. The platform officially lifted the two-day ban on Swift’s name Monday.
“Search has been re-enabled and we will continue to be vigilant for any attempt to spread this content and will remove it if we find it,” X Head of Business Joe Benarroch said in a statement sent to the Wall Street Journal.
Why did this happen?
Sexualized deepfakes of Swift and other celebrities do appear on other platforms, but privacy and policy experts said X’s uniquely hands-off approach to content moderation in the wake of its acquisition by billionaire Elon Musk was at least partly to blame for how widely this material spread. As of January, X had reportedly laid off around 80% of the engineers working on its trust and safety teams since Musk took the helm.
That gutting of the platform’s main line of defense against violating content makes an already difficult content moderation challenge even harder, especially during viral moments when users flood the platform with more potentially violating material. Other major tech platforms run by Meta, Google, and Amazon have similarly downsized their own trust and safety teams in recent years, which some fear could lead to an uptick in misinformation and deepfakes in the coming months.
Trust and safety workers still review and remove some violating content at X, but the company has openly relied more heavily on automated moderation tools to detect those posts since Musk took over. X is reportedly planning to hire 100 additional employees to work in a new “Trust and Safety center of excellence” in Austin, Texas, later this year. Even with those additional hires, the total number of trust and safety staff will still be a fraction of what it was prior to the layoffs.
AI deepfake clones of prominent politicians and celebrities have heightened anxieties over how the technology could be used to spread misinformation or influence elections, but nonconsensual pornography remains the dominant use case. These images and videos are often created using lesser-known, open source generative AI tools, since popular models like OpenAI’s DALL-E explicitly prohibit sexually explicit content. Technological advancements in AI and wider access to the tools have, in turn, contributed to a growing volume of sexual deepfakes on the web.
Researchers in 2021 estimated that somewhere between 90 and 95% of deepfakes on the internet were nonconsensual pornography, the overwhelming majority of which targeted women. That trend shows no signs of slowing down. An independent researcher speaking with Wired recently estimated that more deepfake porn was uploaded in 2023 than in all other years combined. AI-generated child sexual abuse material, some of which is created without images of real people, is also reportedly on the rise.
How Swift’s following could influence tech legislation
Swift’s tectonic cultural influence and particularly vocal fan base are helping reinvigorate years-long efforts to introduce and pass legislation explicitly targeting nonconsensual deepfakes. In the days since the deepfake material began spreading, major figures like Microsoft CEO Satya Nadella and even President Joe Biden’s White House have weighed in, calling for action. Multiple members of Congress, including Democratic New York representative Yvette Clarke and New Jersey Republican representative Tom Kean Jr., released statements promoting legislation that would attempt to criminalize the sharing of nonconsensual deepfake porn. One of those bills, called the Preventing Deepfakes of Intimate Images Act, could come up for a vote this year.
Deepfake porn and legislative efforts to combat it aren’t new, but Swift’s sudden association with the issue could serve as a social accelerant. An echo of this phenomenon occurred in 2022, when the Department of Justice reportedly launched an antitrust investigation into Live Nation after Ticketmaster’s site crumbled under the presale demand for tickets to Swift’s “The Eras Tour.” The incident rekindled some music fans’ long-held grievances toward Live Nation and its allegedly monopolistic practices, so much so that executives from the company were called before a Senate Judiciary Committee hearing to answer for their business practices. Multiple lawmakers made public statements supporting “breaking up” Live Nation-Ticketmaster.
Whether that same level of political mobilization happens this time around with deepfakes remains to be seen. Still, the boost in interest in laws reining in AI’s darkest use cases following the Swift deepfake debacle points to the power of having culturally relevant figureheads attach their names to otherwise lesser-known policy pursuits. That relevance can help jump-start bills to the top of legislative agendas when they would otherwise be destined for obscurity.