Markkula Center for Applied Ethics

How to Avoid AI-Enhanced Attempts to Manipulate the Election

A phone with several AI apps installed. Photo by Solen Feyissa/Pexels.

Ann Skeet and John Pelissero

Ann Skeet is senior director, leadership ethics (@leaderethics), and John Pelissero (@1pel) is director, government ethics, both with the Markkula Center for Applied Ethics at Santa Clara University. They are among the co-authors of “Voting for Ethics: A Guide for U.S. Voters,” from which portions of this piece were adapted. Views are their own.

This article originally appeared in the Los Angeles Times.

 

The headlines this election cycle have been dominated by unprecedented events, among them Donald Trump’s criminal conviction, the attempt on his life, Joe Biden’s disastrous debate performance and his replacement on the Democratic ticket by Vice President Kamala Harris. It’s no wonder other important political developments have been drowned out, including the steady drip of artificial intelligence-enhanced attempts to influence voters.

During the presidential primaries, a fake Biden robocall urged New Hampshire voters to wait until November to cast their votes. In July, Elon Musk shared a video that included a voice mimicking Kamala Harris' saying things she did not say. Originally labeled as a parody, the clip readily morphed into an unlabeled post on X with more than 130 million views, highlighting the challenge voters are facing.

More recently, Trump weaponized concerns about AI by falsely claiming that a photo of a Harris rally was generated by AI, suggesting the crowd wasn’t real. And a deepfake photo of the attempted assassination of the former president altered the faces of Secret Service agents so they appear to be smiling, promoting the false theory that the shooting was staged.

Clearly, when it comes to AI manipulation, the voting public has to be ready for anything.

Voters wouldn’t be in this predicament if candidates had clear policies on the use of AI in their campaigns. Written guidelines about when and how campaigns intend to use AI would allow people to compare candidates’ use of the technology to their stated policies. This would help voters assess whether candidates practice what they preach. If a politician lobbies for watermarking AI so that people can identify when it is being used, for example, they should be using such labeling on their own AI in ads and other campaign materials.

AI policy statements can also help people protect themselves from bad actors trying to manipulate their votes. And a lack of trustworthy means for assessing the use of AI undermines the value the technology could bring to elections if deployed properly, fairly and with full transparency.

It’s not as if politicians aren’t using AI. Indeed, companies such as Google and Microsoft have acknowledged that they have trained dozens of campaigns and political groups on using generative AI tools.

Major technology firms released a set of principles earlier this year guiding the use of AI in elections. They also promised to develop technology to detect and label realistic content created with generative AI and educate the public about its use. However, these commitments lack any means of enforcement.

Government regulators have responded to concerns about AI’s effect on elections. In February, following the rogue New Hampshire robocall, the Federal Communications Commission moved to make such tactics illegal. The consultant who masterminded the call was fined $6 million, and the telecommunications company that placed the calls was fined $2 million. But even though the FCC wants to require that use of AI in broadcast ads be disclosed, the Federal Election Commission’s chair announced last month that the agency was ending its consideration of regulating AI in political ads. FEC officials said that would exceed their authority and that they would await direction from Congress on the issue.

California and other states require disclaimers when the technology is used, but only when there is malicious intent. Michigan and Washington require disclosure of any use of AI. And Minnesota, Georgia, Texas and Indiana have banned the use of AI in political ads altogether.

It’s likely too late in this election cycle to expect campaigns to start disclosing their AI practices. So the onus lies with voters to remain vigilant about AI — in much the same way that other technologies, such as self-checkout in grocery and other stores, have transferred responsibility to consumers.

Voters can’t rely on the election information that comes to their mailboxes, inboxes and social media platforms to be free of technological manipulation. They need to take note of who has funded the distribution of such materials and look for obvious signs of AI use in images, such as missing fingers or mismatched earrings. Voters should know the source of the information they are consuming, how it was vetted and how it is being shared. All of this will build information literacy, which, along with critical thinking, is a skill voters will need to fill out their ballots this fall.


Sep 19, 2024