By ALI SWENSON
NEW YORK — The Federal Communications Commission on Thursday outlawed robocalls that contain voices generated by artificial intelligence, a decision that sends a clear message that exploiting the technology to scam people and mislead voters won’t be tolerated.
The unanimous ruling targets robocalls made with AI voice-cloning tools under the Telephone Consumer Protection Act, a 1991 law restricting junk calls that use artificial and prerecorded voice messages.
The announcement comes as New Hampshire authorities are advancing their investigation into AI-generated robocalls that mimicked President Joe Biden’s voice to discourage people from voting in the state’s first-in-the-nation primary last month.
Effective immediately, the regulation empowers the FCC to fine companies that use AI voices in their calls or block the service providers that carry them. It also opens the door for call recipients to file lawsuits and gives state attorneys general a new mechanism to crack down on violators, according to the FCC.
The agency’s chairwoman, Jessica Rosenworcel, said bad actors have been using AI-generated voices in robocalls to misinform voters, impersonate celebrities and extort family members.
“It seems like something from the far-off future, but this threat is already here,” Rosenworcel told The Associated Press on Wednesday as the commission was considering the regulations. “All of us could be on the receiving end of these faked calls, so that’s why we felt the time to act was now.”
Under the consumer protection law, telemarketers generally cannot use automated dialers or artificial or prerecorded voice messages to call cellphones, and they cannot make such calls to landlines without prior written consent from the call recipient.
The new ruling classifies AI-generated voices in robocalls as “artificial,” bringing them under the law’s existing restrictions.