
### Outrage among Taylor Swift Fans over AI-Generated Nude Images

(NewsNation) — Taylor Swift fans are calling on lawmakers to take action after fake, nude, and sexually explicit images of the singer, purportedly generated using artificial intelligence, surfaced online this year.

As reported by The Verge, one of the most prominent posts of the photos on X, the social media platform formerly known as Twitter, garnered more than 45 million views, 24,000 reposts, and “hundreds of thousands” of likes and bookmarks. X suspended verified users who shared the images, though copies continued to circulate Friday night.

This is not the first time Swift has been targeted by AI: deepfake technology was previously used to create a fake advertisement, showing Swift endorsing cookware brand Le Creuset, in a scheme to scam consumers.

NewsNation has reached out to a spokesperson for Swift for comment.

AI-Generated Taylor Swift Images Circulate

Earlier this year, fake images of Swift started circulating online.

According to a report from 404 Media, the images were shared in a Telegram group “dedicated to abusive images of women.” They were produced using a diffusion model, an AI technology accessible through more than 100,000 apps and publicly available models, The New York Times reported. The security firm Reality Defender said it determined with 90 percent confidence that the images were created using diffusion technology.


Fans rallying behind “Protect Taylor Swift” and others condemning deepfake abuse on X have denounced the spread of the images. In response to the surge of explicit content, some fans flooded the platform with genuine images of Swift singing and performing.

One fan argued that no matter how wealthy a woman is, creating AI-generated nude images of her amounts to sexual harassment, calling the behavior abhorrent and saying it should be illegal.

X’s policies prohibit sharing what the platform calls synthetic media: “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” To be removed, however, deceptive content must meet specific criteria.

Under X’s guidelines, the media in question (including images, videos, audio, or GIFs) may be:

  • Significantly and deceptively altered, manipulated, or fabricated
  • Shared in a deceptive manner or with false context
  • Likely to cause widespread confusion on public issues, impact public safety, or result in serious harm

X Safety announced on the platform that its teams were actively removing all identified Swift images and taking appropriate action against the accounts responsible for posting them.

“We are closely monitoring the situation to ensure prompt resolution of any further violations and removal of the content,” the statement continued.

The Rise of AI and Its Misuse

The escalating problem stems from the rapid advancement of artificial intelligence. Many were initially optimistic about AI tools such as ChatGPT and Bard, which can generate prose, art, and computer code.

Microsoft CEO Satya Nadella has said such technologies will be integrated into the company’s products worldwide.

Nadella has expressed enthusiasm about AI becoming a general-purpose technology that drives economic progress, and business leaders argue it can help people with complex tasks and handle routine work. However, the emerging technology may also pose a threat to jobs.


Concerns about AI were raised by college officials starting in 2023, with some institutions blocking ChatGPT due to fears of academic dishonesty.

AI is also being misused in public life. Before the New Hampshire primary election, an AI-generated robocall impersonating President Joe Biden discouraged voters from participating, and citizens are regularly deceived by fraudulent AI-assisted calls.

Some individuals use AI to create fake images and videos of real people; according to research by independent analyst Genevieve Oh, shared with The Associated Press, more than 143,000 new deepfake videos were posted online in 2023.

Nonconsensual deepfake pornography has become a significant problem, particularly for women, and it is expected to worsen as new AI tools are developed.

According to Adam Dodge, founder of EndTAB, a nonprofit that provides education on technology-enabled abuse, “the reality is that the technology will continue to thrive, evolve, and become as simple as pushing a button.” As long as that is the case, he said, people will undoubtedly misuse it to harm others, primarily through online sexual violence, deepfake pornography, and fake nude images.

Last summer, Westfield High School in New Jersey encountered a similar issue, although the school administration was not informed until October. It was discovered that one or more students had used AI to create pornographic images of fellow students, which were then shared on Snapchat. This incident outraged families and prompted a police investigation.


So, what can be done to address this issue?

Hoaxes and fake images underscore the “urgent need for a new area of legal practice,” Evan Nierman, CEO of Red Banyan, said in a December statement. Nierman highlighted numerous unresolved questions, such as who bears ultimate responsibility for AI misuse: the company behind the tool or the individual who used it?

Writing in The Daily Business Review, Nierman noted that celebrities risk having their personal brands infringed upon through unauthorized use of their images and voices, and that ordinary people, who lack the financial resources of the rich and famous, could find themselves in dire straits if AI deception embroils them in legal disputes over videos or audio that seem authentic but are AI-generated.

Regarding AI-generated images, Paula Brillson, managing counsel of Digital Law Group PLLC, said that unlawful intrusion, false light, and identity theft are all potential privacy violations.

She advised individuals who have been victimized to contact an attorney as the first step. “If your image has been misappropriated or used inappropriately, you may be entitled to damages (including lost profits) for invasion of privacy, right of publicity violation, or defamation.”

Reporting fake sites or accounts only to the platform operators (Facebook, Instagram, and so on) can become a game of whack-a-mole, Brillson said, because offenders often create alternative accounts. She recommended that creators of original works, such as photos, artwork, or music, register copyrights or use modern image-fingerprinting techniques to deter infringement.
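For creators weighing the image-fingerprinting route Brillson describes, one widely used approach is perceptual hashing: it reduces an image to a compact signature that survives resizing and re-encoding, so re-uploads can be flagged automatically. The following is a minimal sketch, not a statement of any particular platform's method, using the open-source Python libraries Pillow and imagehash; the file names and the similarity threshold are illustrative placeholders.

```python
# A minimal sketch of image fingerprinting via perceptual hashing.
# Requires: pip install Pillow imagehash
# File names below are hypothetical placeholders.
from PIL import Image
import imagehash

# Fingerprint the original work once and keep the hash on record.
original_hash = imagehash.phash(Image.open("original_photo.jpg"))

# Later, fingerprint a suspected copy found online.
suspect_hash = imagehash.phash(Image.open("suspected_copy.jpg"))

# Subtracting two hashes yields the Hamming distance between them:
# 0 means identical fingerprints, and small values typically mean the
# same image after resizing, re-compression, or minor edits.
distance = original_hash - suspect_hash

if distance <= 8:  # an illustrative, tunable similarity threshold
    print(f"Likely a copy of the original (distance={distance})")
else:
    print(f"Probably a different image (distance={distance})")
```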


In one prominent case, Scarlett Johansson took legal action against Lisa AI: 90s Yearbook & Avatar, an image-generating app that used her name and likeness in an online advertisement; her representatives told Variety that her legal team had handled the matter. Separately, prominent authors sued OpenAI for copyright infringement, alleging the company used their works without consent to train the AI models behind ChatGPT.

Combatting Deepfake AI

Could this situation prompt changes in AI law? Swift’s fans and lawmakers certainly hope so.

Rep. Yvette Clarke, D-N.Y., wrote on X that what happened to Taylor Swift is nothing new: people have been victimized by deepfakes for years without their consent, and advances in AI have made creating them easier and cheaper. Clarke urged bipartisan cooperation to address the issue.

According to USA TODAY, only 10 states have enacted laws prohibiting deepfake creation. There are currently no federal regulations governing this practice.

Rep. Joe Morelle, D-NY, described the dissemination of Swift images as “appalling” and stated that it happens to women daily.

In an online statement, he said he has pushed to make nonconsensual deepfakes a federal crime through the Preventing Deepfakes of Intimate Images Act. The proposed legislation would criminalize the nonconsensual sharing of sexually explicit deepfakes and provide additional legal remedies for victims.

In Tennessee, Democratic state representative Jason Powell of Nashville introduced a bill that would classify images “created or altered” by AI or other electronic editing tools showing someone’s intimate body parts as a form of indecent exposure, as reported by NewsNation affiliate WKRN.

Francesca Mani, 14, a student at Westfield High School, is collaborating with lawmakers to push for AI regulations. Morelle and Rep. Tom Kean, Jr. of New Jersey joined forces on the AI Labeling Act of 2023, which would mandate disclosures for AI-generated content.

During a press conference, Morelle remarked, “Imagine the horror of receiving intimate images that look exactly like you—or your daughter, your wife, or your sister—and not being able to prove they are fake.” He expressed astonishment that deepfake pornography is not already a federal crime, given its exploitative, abusive, and harmful nature.


This article includes contributions from The Associated Press.
