The use of artificial intelligence in political campaigns raises red flags leading up to the 2024 election
We’re a year away from the presidential election, and political experts and some presidential candidates are already raising red flags about the use of AI technology in campaign ads and other campaign materials.
Last summer, a super PAC backing Gov. Ron DeSantis’ campaign released an ad that used a generative AI tool to make it sound as if former President Donald Trump were reading aloud posts from his social media platform. The ad included no disclaimer that the voice was synthetic.
DeSantis’ own campaign also released an ad featuring AI-generated images of Trump and Dr. Anthony Fauci, again without a disclaimer.

There are currently no federal rules governing campaigns’ use of AI-generated content in political materials such as ads.
“All campaigns can use this. In that sense, who sets the rules of the road other than the campaigns themselves, as they go along?” Russell Wald, policy director at Stanford University’s Institute for Human-Centered Artificial Intelligence, told ABC News Live.
He added that the use of this technology in campaigns is concerning not only because it could be used to spread misinformation among voters, but also because there are no rules in place to govern its use.
Wald said the biggest problem posed by AI-generated campaign materials is that they feed a “liar’s dividend,” in which anyone can dismiss a genuine fact or event as fake and sow doubt.
“I think we may be in the last days where we have any confidence in the validity of what we see digitally,” he said.
Some major tech companies, such as OpenAI, the company behind ChatGPT, have warned Congress about the dangers of AI technology in the political world.
“My worst fear is that we… the technology industry, cause significant harm to the world,” Sam Altman, CEO of OpenAI, told the Senate Judiciary Committee in May. “I believe that if this technology goes wrong, it can go quite wrong.”

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, May 16, 2023, in Washington, D.C. (Win McNamee/Getty Images)
Some leaders in Washington, D.C., are already responding.
Last month, President Joe Biden signed an executive order aimed at addressing safety and security concerns related to artificial intelligence.
A bipartisan group of senators is working on AI regulation, and the Federal Election Commission is considering amending regulations already on the books to ban the use of “intentionally deceptive” AI in campaign ads.
On the campaign trail, some Republican presidential candidates, such as Vivek Ramaswamy and Asa Hutchinson, have acknowledged the potential impact of artificial intelligence.
Rules requiring mandatory disclaimers on campaign materials that use AI-generated content would go a long way, Kevin Liao, a senior director at the political consulting firm Bryson Gillette who previously worked on the campaigns of President Joe Biden and Sen. Elizabeth Warren, told ABC News Live.
“I think that will be an incredibly useful tool so that voters who see these ads can get a sense of what is real and what is not real,” he said.

Liao also cautioned that candidates need to think beyond their own campaigns when it comes to AI, noting that foreign adversaries have access to the same tools.
“We’ve seen in past election cycles how, for example, foreign actors have manipulated social media to access American voters’ feeds and feed them misleading information. AI can certainly be used in the same way,” he said. “And that’s certainly one of the concerns we should all have about technology heading into this election cycle.”
This article originally appeared on abcnews.go.com