
### Reporting FCC’s Ban on AI-Faked Robocall Voices

Singer Taylor Swift and President Joe Biden’s voices are among those that have been impersonated.

In January, a phone call mimicking President Joe Biden instructed Democrats not to participate in the New Hampshire primary. Following a Federal Communications Commission ruling that deems calls using artificially generated voices illegal, effective immediately, states now have a new mechanism to pursue those responsible for such calls.

On February 6, the New Hampshire attorney general announced that a Texas-based company had been identified as the originator of the fabricated Biden robocalls, and a legal inquiry is ongoing.

Addressing the issue, FCC Chairwoman Jessica Rosenworcel emphasized the misuse of AI-generated voices in unsolicited robocalls, stating, “Bad actors are impersonating celebrities, spreading misinformation, and preying on vulnerable individuals.” She added, “We are alerting the perpetrators behind these robocalls that they are under scrutiny.” The new ruling gives state attorneys general enhanced tools to combat these fraudulent activities and to protect the public from deception and scams.

The development of artificial intelligence has drawn significant attention, particularly for its capability to replicate the voices of prominent figures for malicious purposes. The controversy around AI-generated content escalated in 2023, when a TikTok user known as Ghostwriter released a track called “Heart on My Sleeve” that mimicked the voices of Drake and The Weeknd with AI-generated vocals. Later, an advertisement featuring Taylor Swift’s AI-generated voice endorsing Le Creuset cookware, made without her involvement, raised similar concerns about the misuse of well-known personalities’ voices.

Previously, state attorneys general could typically only target the underlying fraud or scam that a robocall carried out. Now that using artificial voice synthesis in robocalls is itself prohibited, the FCC’s decision gives states additional legal grounds to act against the technology’s misuse directly.

In a proactive move, the FCC has been examining the use of AI to mimic voices in fraudulent robocalls, leveraging the Telephone Consumer Protection Act to address the issue. This legislation empowers consumers and businesses to take legal action against robocallers and grants the FCC authority to penalize carriers that facilitate illicit calls.

The FCC’s recent initiatives also aim to harness artificial intelligence for positive purposes, such as identifying and preventing illegal calls. By collaborating with law enforcement agencies nationwide, the FCC is intensifying efforts to combat unlawful robocalls, fostering partnerships that strengthen consumer and business protection across the country.

To report unwanted robocalls, consumers are encouraged to submit complaints through the FCC’s online platform, irrespective of whether AI-generated voices were used. Additionally, consumers can utilize call-blocking and labeling tools to mitigate the impact of such calls, with various resources available to assist in this endeavor.

Last modified: February 9, 2024