
### Elon Musk’s X to Recruit 100 Moderators Following Taylor Swift Incident

Elon Musk’s company, X, has announced plans to hire 100 full-time employees to address child sexual exploitation concerns following the circulation of AI-generated images of Taylor Swift on social media. The San Francisco-based firm, previously known as Twitter, revealed its intention to establish a “trust and safety center” in Austin, Texas. This center will house “in-house agents” responsible for upholding content and safety regulations on the platform.

Although the center will not focus exclusively on child safety, X says it recognizes the importance of investing in measures to prevent offenders from using the platform to distribute or engage with child sexual exploitation (CSE) content. Joe Benarroch, X’s head of business operations, emphasized the significance of these investments.

Elon Musk, who acquired Twitter for $44 billion in late 2022, faced criticism for reducing the headcount in the company’s trust and safety operations. The platform received backlash for hosting antisemitic and neo-Nazi content, leading to some advertisers withdrawing their support.

In response to the proliferation of pornographic deepfake images of Taylor Swift, X temporarily blocked certain searches related to the singer; users attempting those searches saw an error message prompting them to retry. X described the block as a temporary measure intended to prioritize safety on the platform.

The circulation of AI-generated explicit images of Swift sparked outrage among her fan base, known as “Swifties,” who initiated a campaign to counter the negative content with positive images and the #ProtectTaylorSwift hashtag. Reality Defender, a group monitoring deepfakes, reported a surge in nonconsensual pornographic material depicting Swift across various platforms.

The emergence of these deepfakes, particularly on X, raised concerns about the abuse of AI image-generating technology. Ben Decker from Memetica highlighted ongoing efforts to produce explicit AI-generated images of celebrity women, including Swift, on fringe platforms.

As explicit deepfakes become more prevalent, platforms like X are under pressure to curb the dissemination of such content. Increasingly capable and accessible AI tools have made these images easier to create, and correspondingly harder for platforms to combat effectively.

In the European Union, legislation such as the Digital Services Act regulates how content is disseminated on online platforms, including provisions that address deepfakes. Under the EU’s Artificial Intelligence Act, which is pending final approval, companies that create deepfakes with AI systems will be required to disclose to users that the content is artificially generated or manipulated.
