The Federal Communications Commission plans to vote on making the use of AI-generated voices in robocalls illegal. The FCC said that AI-generated voices in robocalls have “escalated during the last few years” and have “the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members.”
FCC Chairwoman Jessica Rosenworcel’s proposed Declaratory Ruling would rule that “calls made with AI-generated voices are ‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocalls scams targeting consumers illegal,” the commission announced yesterday. Commissioners reportedly will vote on the proposal in the coming weeks.
A recent anti-voting robocall campaign used an artificially generated version of President Joe Biden’s voice. The calls told Democrats not to vote in the New Hampshire presidential primary.
An analysis by the company Pindrop concluded that the artificial Biden voice was created using a text-to-speech engine offered by ElevenLabs. That conclusion was apparently confirmed by ElevenLabs, which reportedly suspended the account of the user who created the deepfake.
FCC ruling could help states crack down
The TCPA, a 1991 US law, bans the use of artificial or prerecorded voices in most non-emergency calls “without the prior express consent of the called party.” The FCC is responsible for writing rules to implement the law, and violations are punishable by fines.
As the FCC noted yesterday, the TCPA “restricts the making of telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.” Telemarketers are required “to obtain prior express written consent from consumers before robocalling them. If successfully enacted, this Declaratory Ruling would ensure AI-generated voice calls are also held to those same standards.”
The FCC has been considering how to address artificial intelligence in its robocall rules for at least a few months. In November 2023, it launched an inquiry into AI’s impact on robocalls and robotexts.
Rosenworcel said her proposed ruling will “recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers.
“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” Rosenworcel said. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”