
### Escalating Concerns: AI’s Deceptive Potential Amplified by Deepfakes in Israel-Hamas Conflict

The war in Gaza has painfully illustrated AI’s potential as a propaganda tool, used to create lifelike images of devastation.

WASHINGTON (AP) — Amid the scenes of destroyed homes and streets in Gaza, certain images have left a lasting impact: abandoned, bloodied infants.

These particular images, viewed extensively online since the start of the war, are deepfakes created with artificial intelligence. Look closely and the telltale signs of digital fabrication emerge: fingers that curl oddly, eyes that shimmer with an unnatural light.

The intended outrage evoked by these images, however, is undeniably genuine.

Throughout the Israel-Hamas war, such images have starkly demonstrated AI’s potential as a propaganda tool, capable of producing lifelike depictions of devastation. Since the fighting began, digitally manipulated images spread on social media have been used to push false claims about who is responsible for casualties and to fabricate atrocities that never happened.

Most of the misinformation circulating online about the war has not required AI and has come from more conventional sources. Still, the technology is advancing rapidly with little oversight, a stark reminder of AI’s potential to become another form of weaponry and a glimpse of what awaits future conflicts, elections, and other major events.

Jean-Claude Goldenstein, CEO of CREOpoint, a tech company in San Francisco and Paris specializing in AI-driven claim validation, anticipates a worsening landscape of digitally manipulated content. He emphasizes that with generative AI, the escalation in falsified pictures, videos, and audio will be unprecedented.

The deceptions range from repurposed images of earlier conflicts and disasters passed off as current, to visuals fabricated entirely by AI, such as an image of a crying baby amid bombing wreckage that went viral in the conflict’s earliest days.

AI-generated content also includes videos of alleged Israeli missile strikes, tanks rolling through ruined neighborhoods, and families combing through rubble for survivors.

These fabrications often aim to provoke a strong emotional response by featuring the bodies of infants, children, or families. In the war’s early days, supporters of both sides alleged that the other had harmed children and babies; deepfake images of wailing infants were offered up as photographic ‘evidence’ and quickly seized on as proof.

Propagandists adeptly exploit individuals’ deepest emotions and fears to craft such images, notes Imran Ahmed, CEO of the Center for Countering Digital Hate. Whether portraying a deepfake infant or an authentic image from a different conflict, the emotional impact on viewers remains potent.

The more shocking the image, the more likely it is to be remembered and shared, inadvertently perpetuating the dissemination of misinformation.

Similar deceptive AI-generated content emerged following Russia’s invasion of Ukraine in 2022. For instance, an altered video purported to show Ukrainian President Volodymyr Zelenskyy issuing a surrender order. Despite being easily debunked, such claims persist, highlighting the enduring nature of misinformation.

With each new conflict or election cycle, disinformation purveyors seize opportunities to showcase the latest AI capabilities. This trend has prompted concerns among AI experts and political analysts regarding the forthcoming major elections in various countries, including the U.S., India, Pakistan, Ukraine, Taiwan, Indonesia, and Mexico.

The prospect of AI and social media being harnessed to disseminate falsehoods to U.S. voters has prompted bipartisan alarm among lawmakers in Washington. U.S. Rep. Gerry Connolly of Virginia emphasized the imperative for the nation to invest in developing AI tools to counteract deceptive AI technologies.

Tech startups around the world are developing programs that attempt to detect deepfakes, watermark images so their origin can be verified, and analyze text for dubious claims inserted by AI.
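As one illustration of the origin-verification approach mentioned above, the sketch below is a minimal, assumption-laden heuristic rather than any vendor’s actual tool: it merely scans a file for byte markers that often accompany an embedded C2PA (“Content Credentials”) provenance manifest. The marker strings are assumptions about typical files, and genuine verification requires validating the cryptographic signatures inside the manifest, which this toy script does not attempt.

```python
# provenance_check.py -- toy heuristic sketch, NOT a real verifier.
# Scans an image file's raw bytes for markers that often accompany
# embedded provenance manifests (e.g., C2PA "Content Credentials").
# Real verification must validate the cryptographic signatures inside
# the manifest; this only flags whether a manifest *may* be present.

import sys

# ASCII markers commonly found where a C2PA manifest is embedded
# (JUMBF box tags and the manifest-store label). These are assumptions
# about typical files, not a specification guarantee.
MARKERS = (b"c2pa", b"jumb", b"jumd")

def may_have_provenance(path: str) -> bool:
    """Return True if any known provenance marker appears in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in MARKERS)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        status = ("possible provenance manifest" if may_have_provenance(path)
                  else "no provenance markers found")
        print(f"{path}: {status}")
```

Note that the absence of a manifest proves nothing: most authentic photographs carry no credentials, and social platforms routinely strip metadata on upload, which is part of why the experts quoted here pair detection tools with regulation and digital-literacy efforts.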

Maria Amelie, co-founder of Factiverse, a Norwegian company pioneering AI content analysis, underscores the importance of verifying online content and combating misinformation. Such tools hold significant value for educators, journalists, financial analysts, and others seeking to identify inaccuracies, bias, or fraud in content.

While these technologies show promise, those exploiting AI for deceit often remain ahead of detection efforts. David Doermann, a computer scientist formerly with the Defense Advanced Research Projects Agency, stresses the necessity for enhanced technology, regulations, industry standards, and investments in digital literacy programs to address the multifaceted challenges posed by AI-driven disinformation.

Doermann emphasizes that the solution lies beyond mere detection and removal of deceptive content, necessitating a comprehensive approach to tackle the evolving landscape of misinformation.
