
AI-generated campaign ads would receive a content label in Wisconsin under bipartisan proposal

Creators of audio or video would have to disclose if they used artificial intelligence, as experts warn 2024 could be the "AI Election"

Image: A still from a deepfake video of former President Barack Obama, showing elements of facial mapping used in new technology that lets anyone make videos of real people appearing to say things they’ve never said. AP Photo

A bipartisan group of Wisconsin lawmakers is circulating a bill that would require candidates and political groups to tell the public when they use artificial intelligence to generate audio or video in their ads.

The legislation resembles proposals in other states and at the federal level. It comes as some political experts say the 2024 election cycle may be marked by AI-generated content and influence campaigns.

As proposed, the bill would require any audio or video campaign materials — whether produced by a candidate, PAC or other campaign entity — to disclose at the start and end of an ad if they used “synthetic media,” or media that is “substantially produced by means of generative artificial intelligence.”


Violators of the rule would be subject to a fine of up to $1,000 for each violation.

“We’re not looking at this to score political points,” said Rep. Adam Neylon, R-Pewaukee, a co-author of the bill. “But we’re looking at this to help maintain the integrity in our election system and make sure that what people see and hear when it relates to elections is real and verifiable.”

Neylon is planning to introduce the plan alongside Sens. Romaine Quinn, R-Cameron, and Mark Spreitzer, D-Beloit, and Rep. Clinton Anderson, D-Beloit. The bill is currently being circulated for cosponsorship at the state Capitol.

Cheap and straightforward AI generators have proliferated in just a few years, making it easy to create fake videos of things that never happened, or audio of a candidate saying things they didn’t say.

“All you need is somebody who’s unscrupulous in politics — can you imagine?” said Ryan Calo, a professor of law and information science at the University of Washington.

“So you just need a person who’s willing to take totally off-the-shelf, easy-to-use tools, and depict a politician, a candidate, doing or saying something that they never did,” Calo said.

For example, a recent online video from the presidential campaign of Florida Gov. Ron DeSantis used fake images of former President Donald Trump hugging Dr. Anthony Fauci, the infectious disease expert who led the national response to COVID-19 and became a demonized figure in some right-wing circles. Those images were AI-generated.

The proliferation of such false images and audio carries a two-fold risk, Calo said. AI-generated campaign content makes it easier to spread misinformation about a candidate, and it also undermines voters’ general ability to believe their own eyes and ears.

“In addition to allowing you to pretend that something is so that isn’t, and to fool people into believing something through technology, you also have a way to plausibly deny something that actually did happen,” he said.

The Wisconsin legislators say their bill is one step toward mitigating the role of artificial intelligence in promoting political misinformation.

“I think we have folks on both sides of the aisle that are worried about misinformation and campaigns,” said Anderson, the Assembly Democrat. “And when you have something that could be almost impossible to tell the difference between real and fake, it’s good that we get a grasp on it.”

Similar legislation has passed or is being proposed in other states, including Minnesota, Michigan and Washington.

About three-quarters of Americans are “very” or “somewhat” concerned about how AI-generated content can be used to spread political propaganda, according to polling by YouGov.

An emerging problem in the 2024 election cycle

AI sound and image-generating programs have become more widely available, and more sophisticated, in recent years.

That’s compelled lawmakers across the spectrum, and at both the state and federal levels, to take action.

“It’s a real threat. It really does compromise the environment,” said Calo.

Earlier this month, U.S. Sen. Amy Klobuchar, D-Minn., and U.S. Rep. Yvette Clarke, D-N.Y., wrote to the CEOs of Meta and X to inquire about how the platforms plan to address the use of AI-generated ads on social media. Meta is the parent company of Facebook and Instagram, while X is the new name for the social media site still commonly known as Twitter.

“With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues,” they wrote.

Federal legislation has also been introduced to try to regulate how the content is used. Rep. Clarke introduced a bill in the U.S. House to require disclaimers like those in the Wisconsin proposal.

And a bipartisan group of senators — including Klobuchar and Sen. Josh Hawley, R-Mo. — has introduced federal legislation that would prohibit the “distribution of materially deceptive AI-generated audio or visual media” about national candidates.

The Federal Election Commission has also begun weighing how to regulate faked imagery and audio in campaigning.

But lawmakers have to balance campaigns’ free speech rights — which include a candidate’s right to lie — against the risks of fabricating words and deeds from whole cloth, said Calo.

In that sense, different states’ attempts to regulate the environment could provide a real-time legislative experiment going into the 2024 election cycle.

“It makes sense for the states to do this at the individual level because it’s a laboratory of ideas,” he said.