New York (CNN) —
Sexually explicit, AI-generated images of the country’s most prominent celebrity circulated on social media this week, highlighting the dark side of mainstream artificial intelligence: its capacity to produce convincingly real yet damaging imagery.
X, the platform formerly known as Twitter, became the primary hub for the manipulated images of Taylor Swift. The photos, which depicted the singer in sexually suggestive poses, racked up millions of views before being removed, though given the nature of the internet, the content may well continue to resurface on less regulated channels.
A representative for Taylor Swift declined to comment.
X’s guidelines, like those of most major social media networks, explicitly prohibit posting “synthetic, manipulated, or out-of-context media that may confuse or deceive individuals and result in harm.”
X did not respond to CNN’s request for comment.
As the United States enters a presidential election year, concern is mounting that deceptive AI-generated imagery could be used to spread propaganda and disrupt democratic processes.
Ben Decker, who runs the research firm Memetica, told CNN: “This incident exemplifies the exploitation of AI for malicious purposes without adequate safeguards to protect public discourse.”
Decker pointed to a growing trend of bad actors using advanced AI tools on social media to create harmful content targeting public figures of all kinds. He noted that most platforms still lack effective policies for monitoring such content.
For instance, X has significantly downsized its content moderation team and relies heavily on user reports and automated systems. (X is currently under investigation in the EU for its content moderation practices.)
Insiders revealed to CNN that Meta has also made cuts to teams combating disinformation and coordinated online harassment, raising concerns ahead of the pivotal 2024 elections in the US and globally.
It remains unclear who created the images of Swift. Although some were traced back to platforms such as Instagram and Reddit, they spread most widely on X, which struggled to contain them.
The incident comes amid a boom in AI generation tools such as ChatGPT and DALL-E, alongside a vast array of unmoderated, not-safe-for-work AI models available on open-source platforms, Decker noted.
Decker cautioned that without comprehensive discussions among stakeholders—AI companies, social media platforms, authorities, and civil society—such problematic content will persist and proliferate.
Still, Decker suggested that Swift’s involvement could focus attention on the escalating concerns around AI-generated imagery. After a Ticketmaster outage ahead of her 2022 Eras Tour ticket sales, Swift’s devoted fan base, known as “Swifties,” voiced their outrage online, prompting legislative action against unfair ticketing practices.
Decker speculated that the harmful AI-generated images could trigger a similar response, and that legislators and tech firms may act preemptively to avoid a backlash from someone with Swift’s enormous online influence.
The offensive images of Swift have brought new prominence to a technology long used for “revenge porn,” the posting of explicit images online without the subject’s consent.
Nine US states currently have laws against the creation or sharing of non-consensual digitally altered images made to resemble real people.