'Are we ready?': Elections experts ring alarm bells over AI, deepfakes as 2024 races heat up

The Department of Homeland Security's cybersecurity agency sent warnings and tips to elections officials in North Carolina and elsewhere on how to fight schemes using artificial intelligence to spread misinformation.
Posted 2024-03-22T18:43:09+00:00 - Updated 2024-03-24T15:21:23+00:00
Another sign of the rising role of misinformation in politics: The federal government is issuing guidance to states on how to combat high-tech efforts to trick voters as the 2024 elections heat up.

Earlier this year the Department of Homeland Security’s cybersecurity agency sent warnings and tips to elections officials in North Carolina and elsewhere on how to fight schemes using artificial intelligence to spread misinformation about specific candidates or even the election itself.

Here in North Carolina, the efforts to prepare for such tricks are underway.

“The State Board of Elections has been and continues to be concerned about false information about elections on the Internet and social media,” elections board spokesman Pat Gannon said. “The emergence of AI-driven false information in elections exacerbates these concerns.”

The 2022 elections saw the use of deepfakes — computer-generated images and videos that use AI to trick viewers into believing they’re real — by several political campaigns in North Carolina and nationally, a trend that’s continuing this year.

The Republican National Committee, for instance, launched an anti-immigration ad last year that it described as “an AI-generated look into the country's possible future if Joe Biden is re-elected in 2024.”

Republican Florida Gov. Ron DeSantis, meanwhile, put out an ad with AI-generated images falsely showing his top GOP primary rival, former President Donald Trump, hugging Dr. Anthony Fauci, purporting that Trump supported restrictions that were unpopular with conservatives during the Covid-19 pandemic. DeSantis has since ended his 2024 presidential campaign and endorsed Trump.

Closer to home, Bo Hines, the eventual winner of a 2022 GOP congressional primary, was targeted during that race by an artificially generated ad: It took a Hines comment that had been printed in a newspaper and produced an audio recording that made it sound as if Hines were speaking those words aloud.

In 2024 North Carolina saw its first major AI-backed operation start up toward the end of the primary elections: A Democratic donor funded a website that used AI to fake the voice of Republican Lt. Gov. Mark Robinson — the GOP’s nominee for governor — reading quotes that ventured into the absurd.

Todd Stiefel, the man behind that anti-Robinson effort, defended it as political comedy and noted that the website went out of its way to make sure people knew it was meant as a joke. There’s a clear ethical difference, he told WRAL when the website launched, between using AI for satire or using it to try to actually trick voters.

Robinson’s Democratic opponent wasn’t laughing. Josh Stein, the attorney general and Democratic nominee for governor, condemned the website in a statement from his political campaign. A spokesperson at the North Carolina Department of Justice also had previously told WRAL the department is watching closely for reports of AI being misused in the election.

“Attorney General Stein is deeply concerned about the use of AI to deceive and harm North Carolinians,” spokeswoman Nazneen Ahmed said. “He’s pushed the federal government to look at this issue closely as it relates to child sexual abuse, and our office is taking a close look at other ways the technology can be exploited.”

It’s not just North Carolina. Earlier this year authorities in New Hampshire announced a criminal investigation into a robocall that faked President Joe Biden’s voice to spread election misinformation among local voters. The North Carolina State Board of Elections didn’t receive reports of such issues leading up to the 2024 primary, Gannon said, but is prepared to investigate should anything occur before the Nov. 5 general election.

Opponents of government regulations that would crack down on AI cite free-speech concerns. But some companies have begun putting their own internal rules in place. Google, for instance, announced last year that its YouTube unit would require users to disclose whether their videos feature synthetic content. And YouTube has pledged to take down any videos that could be used for voter suppression or election misinformation. Meta, the parent company of Facebook and Instagram, adopted a policy last year requiring labels on any political ads created with AI.

'Science fiction stuff'

While federal and state elections experts raise the alarm, it remains to be seen how much urgency North Carolina political leaders, other than Stein, will assign to concerns over the use of AI in the elections.

The fact that no new laws have been passed cracking down on AI in North Carolina ahead of the 2024 general election worries advocates like Josh Lawson. The former top lawyer for the state elections board, Lawson is now the director of “AI and democracy” at the Aspen Institute, a national think tank. But even as states beef up their laws, he said, it will also take a broader societal effort to raise awareness of how to spot AI-generated misinformation, and where to find real information.

“You have to proceed with two things that you're wrapping your arms around,” Lawson said in an interview. “One, that it is helpful for the democratic process to establish what the laws are in using this very, very powerful technology at the intersection of AI and elections. And two, to prepare for a world where certain bad actors don't care that it's illegal.”

The state legislature returns to Raleigh next month for what’s likely to be a brief session focused mostly on making tweaks to the two-year state budget passed last year. When asked last month about concerns over AI influencing the elections — and whether any changes need to be made during that session, ahead of the general election — House Speaker Tim Moore said in an interview that he didn’t see the need for new laws.

It’s deeply concerning, Moore said, but he believes existing defamation laws should be enough to fight back.

“Once you get their voice, once you get their image — just off of television or whatever — you can go in and reconstruct it, make it look like they said anything or did anything,” Moore said. “There’s certainly a lot of danger in that. But I would submit to you that under our current laws, if someone were to engage in that kind of activity … it probably goes along with slander and libel.”

Defamation laws could help candidates who are targeted by AI; state law also already makes it a crime to spread false information about elections.

Moore and other legislative Republicans already passed numerous changes to election laws for 2024. They banned elections officials from receiving outside grants to help bolster their budgets, loosened the rules governing partisan poll observers, moved up the deadline for mail-in ballots and changed the way results are reported on election night.

That lawmakers made so many other election law changes during last year’s session without touching AI suggests the issue isn’t a priority.

However, Moore didn’t entirely shut the door on the possibility of the legislature taking some sort of last-minute action against AI ahead of the November elections.

“If we need to look at it, we certainly will,” he said. “This is brand-new stuff, right? Just a few years ago this would’ve been science fiction stuff — A Schwarzenegger futuristic movie or something. And now, here it is.”

'Are we ready for it?'

Lawson, who lives in Raleigh, said he’s helped other states write new laws preparing for AI in the elections and he’ll be watching how North Carolina lawmakers proceed.

“Are we ready for it? It depends,” he said. “This is a live game. And the people playing it have decisions that they're going to be making in the next couple months. There are things that we can do, that will continue to make a secure process even more secure.”

The federal government hasn’t passed any laws targeting election misinformation fueled by AI, although the Federal Communications Commission issued an agency rule cracking down on AI-assisted robocalls earlier this year, after the fake Biden voice made national news.

Many states, meanwhile, have enacted laws that prohibit the use of deepfakes against private citizens. North Carolina is one of them. But only a handful — California, Minnesota, Texas, Washington and Wisconsin — have provisions specifically related to election campaigns, according to the National Conference of State Legislatures, which tracks laws passed in each state.

Regardless of whether North Carolina passes any new laws regulating AI in elections, Gannon and Lawson both said it’s important for regular people to be sure they’re getting their news about candidates or the election itself from trusted, established sources.

For people who want to check whether a video or audio clip is fake, there are online tools advertised as being able to catch AI-generated content. Fact-checking service PolitiFact, a partner of WRAL, recently tested several of them and found that while they work some of the time, they’re not fully accurate.

'Risks to election infrastructure'

Elections officials themselves are also trying to become more nimble and proactive. Gannon said the elections board recently ran an exercise with election workers, law enforcement, information-technology security staffers and others on a variety of election-related issues — and that elections staff meet routinely with law enforcement to share information about the elections and any real or potential threats.

“If we identify deepfakes or other misinformation efforts that might violate election laws, the State Board of Elections has an investigations division that can review the situation and pursue a criminal case, if warranted,” Gannon said. “Our local, state and federal law enforcement partners may also become involved, depending on the situation.”

The documents the federal government recently sent the state elections board enumerate the ways AI could be used to manipulate voters in the elections. The trickery, according to the Cybersecurity and Infrastructure Security Agency, or CISA, could come from domestic sources or foreign governments.

  • Deepfake videos making it appear that politicians, news anchors or others have said things they never actually did.
  • Real videos or images that have been digitally altered to show something new and false.
  • Phone calls using AI-generated voices impersonating politicians or celebrities, such as the fake Biden call in New Hampshire.
  • Impersonation of election officials to gain access to sensitive information or systems.
  • The use of robocalls to inundate help centers with fake calls, crowding out real voters trying to call with questions.

“AI capabilities present opportunities for increased productivity, potentially enhancing both election security and election administration,” the CISA report said. “However, these capabilities also carry the increased potential for greater harm, as malicious actors, including foreign nation state actors and cybercriminals, could leverage these same capabilities for nefarious purposes. For the 2024 election cycle, generative AI capabilities will likely not introduce new risks, but they may amplify existing risks to election infrastructure.”

WRAL State Government Reporter Paul Specht contributed reporting.