
AI Companies Reach Consensus on Curbing Election “Deepfakes” Yet Stop Short of Prohibition

Google, Microsoft, OpenAI, Meta, TikTok, and Adobe developed an accord to respond to the proliferation of deceptive AI-generated election content.

Leading AI companies are preparing to sign an “accord” committing them to advance technology for detecting, classifying, and managing AI-generated images, videos, and audio clips designed to deceive voters ahead of crucial elections in multiple countries this year.

As per a draft obtained by The Washington Post, the pact, initiated by Google, Microsoft, Meta (formerly Facebook), OpenAI, Adobe, and TikTok, does not explicitly prohibit deceptive AI-generated election content. Twitter, now known as X, opted out of signing the accord.

The document emphasizes that AI-generated content, often originating from the entities’ resources and disseminated on their platforms, poses election integrity risks. It suggests measures to address these risks, such as tagging suspected AI content and raising public awareness about AI-related threats.

The agreement highlights that the deliberate dissemination of fabricated AI election materials can mislead the public, jeopardizing the integrity of democratic processes.

While deepfakes, AI-manipulated images and videos, have existed for years, their quality has improved significantly, blurring the line between authentic and fraudulent media. Moreover, the tools required to create deepfakes are now widely accessible, simplifying their production.


AI-generated content has increasingly infiltrated election campaigns worldwide. Instances include a Ron DeSantis ad emulating Donald Trump’s voice, Imran Khan using AI to deliver speeches while detained, and fake calls discouraging voting in the New Hampshire primary using an AI-generated Biden voice.

Stakeholders, including regulators, AI experts, and advocates, have urged tech giants to curb the spread of deceptive election content. This new agreement mirrors a voluntary commitment made by these companies and other stakeholders in July, following a White House summit, pledging to identify and label false AI content on their platforms. The signatories agree to educate users about deceptive AI content and maintain transparency regarding their deepfake detection efforts.


These tech behemoths have each instituted individual policies addressing misleading AI content. TikTok prohibits the use of AI-generated content mimicking public figures for political or commercial endorsements. Meta requires political advertisers to disclose AI usage in ads, while YouTube, owned by Google, requires creators to label AI-generated content.

Despite showcasing watermarking technologies, Google has not mandated their use for identifying AI content. Adobe, which restricts AI-driven manipulation of imagery, nonetheless saw an influx of fabricated images related to the Gaza war on its stock photo platform.
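The watermarking approaches mentioned above generally work by embedding provenance metadata inside the media file, for example C2PA “Content Credentials”, the standard backed by Adobe, Google, and Microsoft. As a rough illustration only (not the accord’s method, and not real verification), a naive byte scan for common C2PA-related signatures might look like this; genuine verification requires cryptographically validating the manifest with a C2PA SDK:

```python
def has_provenance_marker(data: bytes) -> bool:
    """Naively scan file bytes for provenance-related signatures.

    Illustrative only: C2PA manifests live in JUMBF boxes (type "jumb"),
    so a byte scan can hint that a manifest *might* be present, but it
    cannot verify authenticity or detect invisible pixel watermarks
    such as Google's SynthID.
    """
    markers = (b"c2pa", b"jumb", b"contentcredentials")
    lowered = data.lower()
    return any(m in lowered for m in markers)


# Hypothetical usage with a local file:
# with open("photo.jpg", "rb") as f:
#     print(has_provenance_marker(f.read()))
```

Because such markers can be stripped or forged, platforms treat them as one signal among several rather than as proof of origin.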


Efforts to establish a robust system for identifying and labeling AI content on social platforms are ongoing, but significant challenges persist.

Last modified: February 28, 2024