### AI’s Deceptive Might: Gaza War Deepfakes Unleash Terror

Among the images of devastated homes and streets in Gaza, some stand out for their sheer horror: abandoned infants covered in blood.

These images have circulated widely online since the conflict began, but they are not authentic: they are deepfakes created with artificial intelligence. Closer inspection reveals telltale signs of digital manipulation, such as oddly curled fingers or unnaturally gleaming eyes.

Nevertheless, the outrage these images were intended to incite is undeniably real.

The images from the Israel-Hamas conflict have starkly demonstrated AI’s potential as a propaganda weapon, capable of producing realistic depictions of violence. Throughout the fighting, digitally altered images shared on social media have been used to make false claims about who was responsible for casualties or to fabricate atrocities that never occurred.

Many of the misleading claims circulating online about the conflict did not require AI and came from more conventional sources. But rapid advances in the technology mean such deceptive content can now be produced at scale with minimal oversight, underscoring AI’s alarming potential as a weapon and offering a glimpse of what may come in future conflicts, elections, and other major events.

Jean-Claude Goldenstein, CEO of CREOpoint, a San Francisco- and Paris-based technology company that uses AI to assess the validity of online claims, expects the situation to get worse before it gets better. His company has compiled a database of the most widely circulated deepfakes to emerge from Gaza, highlighting the growing role generative AI can play in manipulating pictures, video, and audio.

Examples include repurposing photos from earlier conflicts or disasters and presenting them as current, as well as creating entirely new images with generative AI, such as a viral picture of a crying infant amid bombing wreckage that circulated in the conflict’s earliest stages.

AI-generated content also includes videos portraying purported Israeli missile attacks, military vehicles traversing through destroyed neighborhoods, and families searching through debris for survivors.

Many of these fabrications are designed to provoke strong emotional responses by featuring the bodies of infants, children, or families. During the conflict’s violent opening days, supporters of both sides accused the other of harming children and infants, with deepfake images of wailing babies quickly circulating as supposed photographic evidence.

Propagandists behind these images excel at exploiting people’s deepest emotions and fears, as noted by Imran Ahmed, CEO of the Center for Countering Digital Hate. Whether it’s a deepfake infant or an authentic image of a child from a different conflict, the emotional impact on viewers remains potent.

The more abhorrent an image, the more likely users are to remember it and share it, unwittingly spreading the disinformation further.

Following Russia’s invasion of Ukraine in 2022, similarly deceptive AI-generated content emerged. For instance, a manipulated video purported to show Ukrainian President Volodymyr Zelenskyy issuing a surrender order. Despite the ease of debunking such misinformation, these claims persist, underscoring the enduring threat posed by AI-driven falsehoods.

With major elections approaching in several countries, AI experts and political analysts worry that AI and social media will be exploited to spread misinformation. U.S. lawmakers, including Rep. Gerry Connolly, stress the need to invest in AI tools designed to counter deceptive content.

Various tech startups worldwide are developing programs to detect deepfakes, apply watermarks to images for verification, or analyze text to identify inserted falsehoods. These tools are crucial for educators, journalists, financial analysts, and others seeking to combat misinformation, plagiarism, or fraud.
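One common building block in such verification tools is invisible watermarking: embedding a machine-readable mark in an image at publication time so its provenance can be checked later. The sketch below illustrates the basic idea in Python using least-significant-bit steganography with the Pillow library; the file names and the embedded tag are hypothetical, and this toy scheme stands in for the far more robust, tamper-resistant watermarks that real verification products use.

```python
# Minimal sketch of invisible image watermarking via least-significant-bit
# (LSB) embedding with Pillow. Illustrative only: production provenance
# systems use schemes that survive compression, cropping, and re-encoding.

from PIL import Image


def embed_watermark(image_path: str, message: str, out_path: str) -> None:
    """Hide a short UTF-8 message in the blue channel's least significant bits."""
    img = Image.open(image_path).convert("RGB")
    pixels = img.load()
    # Length-prefixed payload so the verifier knows where the message ends.
    payload = len(message).to_bytes(4, "big") + message.encode("utf-8")
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > img.width * img.height:
        raise ValueError("Message too long for this image")
    for idx, bit in enumerate(bits):
        x, y = idx % img.width, idx // img.width
        r, g, b = pixels[x, y]
        pixels[x, y] = (r, g, (b & ~1) | bit)  # overwrite the blue LSB
    img.save(out_path, "PNG")  # lossless format preserves the hidden bits


def extract_watermark(image_path: str) -> str:
    """Recover the hidden message written by embed_watermark."""
    img = Image.open(image_path).convert("RGB")
    pixels = img.load()

    def bit_at(idx: int) -> int:
        x, y = idx % img.width, idx // img.width
        return pixels[x, y][2] & 1

    def read_bytes(start_bit: int, n: int) -> bytes:
        out = bytearray()
        for b in range(n):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | bit_at(start_bit + b * 8 + i)
            out.append(byte)
        return bytes(out)

    length = int.from_bytes(read_bytes(0, 4), "big")  # 4-byte prefix = 32 bits
    return read_bytes(32, length).decode("utf-8")


if __name__ == "__main__":
    # Hypothetical file names and tag, for illustration only.
    embed_watermark("photo.png", "issued-by:newsroom-2024", "photo_marked.png")
    print(extract_watermark("photo_marked.png"))  # -> issued-by:newsroom-2024
```

A lossless format such as PNG is essential here, since lossy compression would destroy the low-order bits carrying the mark; robustness to recompression and editing is precisely what production watermarking schemes add on top of this basic idea.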

While these advancements show promise, those propagating lies through AI often stay ahead of detection efforts. David Doermann, a computer scientist formerly with the Defense Advanced Research Projects Agency, stresses the need for improved technology, regulations, industry standards, and digital literacy initiatives to address the challenges posed by AI-generated disinformation effectively.

In combating the proliferation of AI-driven falsehoods, a multifaceted approach encompassing technological innovation, regulatory frameworks, and public education is imperative to safeguard the integrity of information in the digital age.
