
Fake Biden Phone Call Triggers State Probe, Sparking AI Worries for 2024 Election

A fake voice message mimicking President Biden told New Hampshire residents not to vote in the primary.

A deceptive robocall scheme impersonating President Joe Biden, which urged residents not to vote in Tuesday's presidential primary, is under investigation by the New Hampshire attorney general's office.

Preliminary findings suggest that the Biden-like voice on the call may have been artificially generated, and that the call was "spoofed" to appear to come from the manager of a political committee supporting President Biden's write-in campaign in the New Hampshire Democratic presidential primary.

The fraudulent robocall, disseminated on Sunday, falsely claimed that votes mattered only in November, not in the primary, and suggested that voting in the primary would only aid Republican efforts to re-elect Donald Trump. The call displayed a specific New Hampshire Democrat's phone number on caller ID screens, prompting the inquiry, as reported by NBC News.

The attorney general's office is working to determine who received the call and who the deceptive campaign was intended to target.

Individuals who received the call are advised to contact the Department of Justice Election Law Unit via email at [email protected], providing the time and date of the call, its apparent source, its content, and any other relevant details.

On the risks posed by robocalls and artificial intelligence (AI), The Hill highlights the absence of adequate safeguards against the spread of misinformation, making the use of AI in elections a growing concern. Democratic U.S. Sen. Edward Markey has introduced the BIAS Act, which aims to mitigate AI bias by requiring federal agencies that use AI to establish civil rights oversight offices.

As the government grapples with regulating AI ahead of the 2024 elections, Meta and Google are rolling out guidelines to improve transparency around the use of generative AI in political advertisements. The emergence of advanced AI tools like OpenAI's ChatGPT has raised concerns among politicians, including Markey, that misinformation could spread faster than ever, especially around critical events like primaries.

Meta now requires political advertisers to disclose the use of AI in ads featuring digitally altered realistic content intended to deceive viewers. Similarly, Google mandates that advertisers reveal whether their ads incorporate artificially generated or digitally altered images or videos of real individuals or events.

Last modified: January 25, 2024