FEC says it doesn't have authority to regulate AI content in political ads
But commissioners are asking Congress for more jurisdiction over the issue.
The Federal Election Commission deadlocked on Thursday over whether to move forward on a petition to regulate artificial intelligence-generated content in political ads.
The FEC voted 3-3 along party lines on whether to open the matter for public comment, with two members, a Democrat and a Republican, expressing skepticism about the agency's authority over AI.
"Our jurisdiction on this point is limited to instances where a campaign fraudulently misrepresents itself as acting on behalf of any other candidate or political party," Commissioner Allen Dickerson, a Republican, said at the FEC's public meeting.
"Now, with my full support, the commission has asked Congress to expand our jurisdiction," Dickerson said.
Commissioner Ellen Weintraub, who voted unsuccessfully with other Democrats to open the petition for public comment, did not weigh in on whether the FEC has jurisdiction but pointed to the value of soliciting outside feedback.
"Frequently, the comments that we get will address whether we have legal grounds to move forward on it, whether we have jurisdiction," she said. "These are all arguments that are frequently addressed in the comments, and I think it's important to engage with the public on these topics."
The meeting was the FEC's first public engagement on the issue, which is increasingly in the spotlight as AI is adopted across more parts of society and appears more and more often in political content.
Candidates, their campaigns and groups that support them have already begun using hyper-realistic AI-generated photos and videos, known as "deepfakes," in their advertising and other content aimed at the public.
The petition that was considered Thursday had been filed by the nonprofit advocacy group Public Citizen. It argued that the FEC, which regulates campaign finance, should also regulate some misleading AI-generated content under its existing rules on "fraudulent misrepresentation" of a candidate.
Lawmakers in Washington have sought to address the issue. Democrats in both chambers of Congress recently introduced a bill that would require a disclaimer on political ads that feature AI-generated images or videos. Republicans have not yet signaled support or opposition to that proposal.
On Tuesday, President Joe Biden convened a group of technology leaders to discuss the risks and promises of the new technology, and Senate Majority Leader Chuck Schumer unveiled a framework for AI policy and governance.
Schumer highlighted the risks to politics and elections in a speech this week to the national security think tank Center for Strategic and International Studies.
"We could soon live in a world where political campaigns regularly deploy totally fabricated -- yet totally believable -- images and footage of Democratic or Republican candidates, distorting their statements and greatly harming their election chances," he said.
The proposed legislation in Congress would address only a sliver of the AI-generated content that could influence politics. Because it applies only to political advertisements, the bill would have little bearing on such content spread by ordinary social media users.
Political experts also see potential value in AI beyond its capacity to create fake content, such as automated voter canvassing and email communications.