The recent unanimous decision by the Federal Communications Commission (FCC) designated AI-generated voices as “artificial” under the Telephone Consumer Protection Act, effectively prohibiting their use in robocalls. The ruling followed an incident in New Hampshire in which a robocall using an AI-generated voice impersonating President Biden circulated before the state’s primary.
While experts view the FCC’s action as a positive initial step in combating deceptive AI-generated content, they emphasize the need for a more comprehensive approach. Julia Stoyanovich, an associate professor at New York University’s Tandon School of Engineering, highlighted the importance of considering AI-generated media as a whole and implementing broader regulations to address its use in various contexts.
The Telephone Consumer Protection Act empowers the FCC to penalize robocallers and block calls from carriers involved in illegal robocalls, including those utilizing AI-generated voice cloning. FCC Chair Jessica Rosenworcel warned about the growing threat of AI technology in facilitating scams and emphasized the need to protect consumers from fraudulent activities.
Efforts are now underway to push the Federal Election Commission (FEC) to supplement the FCC’s actions and tighten regulations concerning AI usage. Advocates, including Public Citizen, have called for stricter measures to safeguard against AI-related deception, particularly in the lead-up to the 2024 election.
Despite the FCC’s recent move, concerns persist regarding the unregulated use of AI-generated images and videos in political campaigns. As these materials become more prevalent, there is a growing call for enhanced oversight to ensure the integrity of elections and protect citizens from misinformation and manipulation.