More AI regulation is coming down the pipeline from the federal government, this time concerning the use of artificially generated voice impersonations in robocalls. The Federal Communications Commission (FCC) issued a ruling that officially makes it illegal to use artificially generated voices in robocalls.
In a statement, FCC Chairwoman Jessica Rosenworcel said the ruling is largely focused on stopping people who use artificial intelligence in robocalls for malicious purposes. "Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters," Rosenworcel said. "We're putting the fraudsters behind these robocalls on notice."
So how did we get here and what does it mean for AI?
The Origin Of The Robocall Crackdown
You don't have to look far into the past to see why the FCC so quickly and unanimously jumped to a ruling on artificially generated voices in robocalls. Several weeks ago, just days before the presidential primary in New Hampshire, voters in that state received robocalls using an artificially generated voice meant to imitate U.S. President Joe Biden. In the call, the fake Biden urged New Hampshire voters not to vote.
New Hampshire Attorney General John M. Formella said the state ultimately traced the calls to a Texas company. That company did not respond to the allegations, but records show it received money from the Republican Party as recently as 2022. In addition to the fake voice, the calls were spoofed to appear as if they were coming from Kathleen Sullivan, a former chairwoman of the New Hampshire Democratic Party. New Hampshire has a state law that prohibits anybody from preventing or deterring somebody from voting.
This is, of course, not the first time AI-generated voices have been used en masse. Some popular creators even make skits using spoofed voices of famous people, including politicians. But it's certainly the most high-profile instance of an organized attempt to undermine democracy through intentionally misleading use of artificial intelligence.
Growing Calls For Protection Amid High-Profile Examples Of Abuse
The United States has been notoriously slow to address major concerns around the use of generative AI, and that delay is leading to serious consequences. Now, several high-profile instances of AI being used for nefarious purposes have led to increased pressure from consumers and politicians alike.
A flood of artificially generated pornographic images of Taylor Swift hit the social media platform X a few weeks ago, leading the platform to take perhaps the least effectual measure possible and ban searches for "Taylor Swift" before eventually removing the images entirely. And instances of this type of content only continue to grow.
While many platforms do have policies banning the posting of "synthetic" or "artificial" content in these instances, the reality is that platforms do little to enforce those policies. Unless, of course, the fan base of the biggest star in the world is on their heels. The ordeal has led to renewed calls from some politicians to ban nonconsensual deepfakes entirely.
How The Robocall Ban Could Affect Content Creators
It's safe to say the FCC's unanimous decision to ban AI in robocalls won't upset many people. We wouldn't be surprised if most Americans welcomed more regulation of all these random, spoofed, and spam calls. But the move to ban the use of AI in calls at a fairly swift pace suggests we may see more immediate action against these kinds of unwarranted uses of generative AI. Or that, at the very least, the government could act swiftly.
Some experts expect the U.S. to adopt a sort of grading system that rates the different types of AI and risks they may pose. This would be similar in effect to a system in place in the European Union.
Speaking of the EU, that body's newly adopted AI Act will bring sweeping changes across 2024 designed to protect consumers and creators in ways generative AI companies have so far avoided. One interesting caveat: open-source AI companies are exempt from many of the broad restrictions placed on companies like OpenAI and Google.
But it also ultimately means there may soon be penalties for those who try to mislead or deceive an audience. As platforms like TikTok require users to disclose when a creator uses AI and government bodies flex more muscle around the use of generative content, we may finally see some accountability in the space.