States are cracking down on deepfakes ahead of the 2024 election

But they might be missing the most concerning threats.

March 27, 2024, 3:08 PM

The recording sounded an awful lot like President Joe Biden. In a robocall made to thousands of New Hampshire residents ahead of the state's presidential primary in January, an artificial-intelligence-generated imitation of the president's voice urged voters not to bother voting in the primary. NBC News later reported that the audio was a complete fabrication, created by a New Orleans street magician who said he was hired by Steve Kramer, a Democratic operative who has worked on Rep. Dean Phillips's presidential campaign.

Plenty of questions still linger about this scandal, but the biggest may be: How worried should we be about AI-generated fakery this election? In an attempt to get out in front of the problem, state legislators have been introducing and passing bans on deepfakes — media created using AI to impersonate politicians — around elections. Since January of last year, 41 states have introduced election-related deepfake bans, according to tracking by Public Citizen. Eight of those states have enacted laws regulating deepfakes, joining California and Texas, which banned the use of deepfakes in elections in 2019. But these laws miss some of the AI threats experts say are most pressing, and they demonstrate how difficult it is to legislate this emerging technology while preserving First Amendment rights.

Deepfakes first emerged more than six years ago and — like so much of the internet — were originally used to generate porn. But the arrival over the last year of commercially available and easy-to-use AI platforms such as ChatGPT and DALL-E has made this technology much more accessible, creating new opportunities — and new threats. This is likely what spurred the sudden cascade of deepfake bans at the state level, said Daniel Weiner, the director of elections and government at the Brennan Center for Justice.

"Reasonably convincing deepfakes are easier for ordinary people who don't have a lot of resources or a lot of technical skill to create," Weiner said, adding that AI is a "threat amplifier" that has exacerbated existing risks to democracy, such as disinformation (risks that have also increased since 2020 for reasons unrelated to AI).

The new state laws vary in exactly how they approach the problem of misleading, AI-generated content around elections. A law enacted last week in Wisconsin, for instance, requires any campaign audio or video that includes AI-generated media to carry a clear disclaimer. A law enacted in New Mexico earlier this month also requires disclaimers, and it makes it a crime to use a deepfake within 90 days of an election with the intent of altering voters' behavior. (Something like the Biden deepfake might fall under this.)

Some lawmakers have argued that laws like these infringe on freedom of expression, so legislators sometimes have to thread the needle between protecting free speech and preventing misleading content from interfering with elections. For instance, a Georgia bill that passed the state House in February would make it a felony to "publish, broadcast, stream or upload" a deepfake within 90 days of an election with the intent of influencing the election or misleading voters — but it specifically carves out exemptions for satire, parody, journalism and campaign ads, so long as they include a disclaimer. The bill had bipartisan support in the state House, but some legislators still worried it infringed on freedom of speech, with one Republican state legislator saying it "threatens to erode the bedrock of freedom."

The carveouts written to thread that needle can also make the laws too specific, causing them to miss some of the other risks AI poses to democracy, according to Josh Lawson, the director of AI and democracy at the Aspen Institute. While state and federal laws have primarily focused on deepfakes that impersonate politicians, experts like Lawson are more concerned about other types of deceptive content, such as AI-generated mass texting campaigns that target voters with customized misinformation, telling them, for example, that their polling location has changed. This kind of tactic predates AI (just ask any Canadian), but the new technology makes it easier and more precise.

During a recent briefing with the National Association of Secretaries of State, Lawson and other experts presented attendees with six AI risk scenarios that threaten democracy, and none of them had to do with politicians being impersonated. Instead, they included fake local news sites, AI-augmented phishing schemes designed to gain access to election systems, and voice cloning that imitates local election officials to spread false information and disrupt voting. "Legislators have perhaps over-calibrated on deceptive use of imagery depicting candidates," Lawson said.

Indeed, while the Biden robocall deepfake impersonated the president, the message wasn't designed to make Biden look bad by implicating him in a fake gaffe or scandal; rather, it was designed to discourage voters from showing up at all. And this kind of risk isn't always covered by the new laws being introduced.

As we approach the general election this fall, we're entering a brave new world of potential threats to democracy. The new laws are a good first step toward addressing those threats, but the election may reveal all the ways in which they fall short.
