Days after Vice President Kamala Harris launched her presidential bid, a video — created with the help of artificial intelligence — went viral.
“I … became the Democratic presidential candidate because Joe Biden finally exposed his senility during the debate,” a fake Harris voice-over said in an audio track that replaced the sound of one of her campaign ads. “I was selected because I am the ultimate diversity hire.”
Billionaire Elon Musk — who has endorsed Harris’ Republican opponent, former President Trump — shared the video on X, then clarified two days later that it was actually intended as a parody. The original tweet had 136 million views. A follow-up calling the video a parody got 26 million views.
For Democrats, including California Governor Gavin Newsom, the incident is no laughing matter. It has prompted calls for more regulation of AI-generated videos carrying political messages and renewed debate over the appropriate role of government in reining in the emerging technology.
On Friday, California lawmakers gave final approval to a bill that would ban the distribution of deceptive campaign ads or “electioneering communications” within 120 days of an election. Assembly Bill 2839 targets manipulated content that would harm a candidate’s reputation or electoral prospects, along with confidence in an election’s outcome. It is intended to address videos like the Harris parody that Musk shared, though it includes exceptions for parody and satire.
“We’re seeing California enter its first election in which disinformation powered by generative AI will destroy our information ecosystem like never before and millions of voters will no longer know what images, audio or videos to trust,” said Assemblywoman Gail Pellerin (D-Santa Cruz). “So we have to do something.”
Newsom has signaled that he will sign the bill, which would take effect immediately, in time for the November election.
The legislation updates a California law that prohibits people from distributing deceptive audio or visual media intended to damage a candidate’s reputation or deceive voters within 60 days of an election. State lawmakers say the law should be strengthened during an election cycle when people have been flooding social media with digitally altered videos and photos known as deepfakes.
The use of deepfakes to spread misinformation has worried lawmakers and regulators during previous election cycles. Those fears have intensified since the release of new AI-powered tools, such as generative AI chatbots that can quickly produce images and videos. From fake robocalls to bogus celebrity endorsements of candidates, AI-generated content is testing tech platforms and lawmakers alike.
Under AB 2839, candidates, election committees or election officials can request a court order to have deepfakes taken down. They can also sue the person who distributes or republishes the fraudulent material for damages.
The law also applies to deceptive media posted within 60 days after an election, including content that depicts voting machines, ballots, voting sites or other election-related property in a way that could undermine confidence in the election results.
It does not apply to satire or parody labeled as such, or to broadcast stations that inform viewers that what is depicted does not accurately represent a speech or event.
Tech industry groups oppose AB 2839, along with other bills that take aim at online platforms for failing to moderate deceptive election content or label AI-generated content.
“This will result in the loss and blocking of constitutionally protected free speech,” said Carl Szabo, vice president and general counsel of NetChoice. The group’s members include Google, X, Snap, Facebook parent Meta and other tech giants.
Online platforms have their own rules about manipulated media and political ads, but policies can vary.
Unlike Meta and X, TikTok does not allow political ads, and it says it may remove even labeled AI-generated content that depicts public figures such as celebrities “if it is used for political or commercial endorsement.” Truth Social, the platform Trump created, does not address manipulated media in its rules about what is not allowed on the platform.
Federal and state regulators have cracked down on AI-generated content.
The Federal Communications Commission in May proposed a $6 million fine against Steve Kramer, the Democratic political consultant behind robocalls that used AI to imitate President Biden’s voice. The hoax calls discouraged participation in New Hampshire’s Democratic presidential primary in January. Kramer, who told NBC News he planned the calls to draw attention to the dangers of AI in politics, also faces criminal charges of felony voter suppression and impersonating a candidate.
Szabo said existing laws are enough to address concerns about election deepfakes. NetChoice has sued several states to block laws aimed at protecting children on social media, arguing that they violate free speech protections under the 1st Amendment.
“Just making a new law doesn’t do anything to stop bad behavior, you have to enforce the law,” Szabo said.
More than two dozen states, including Washington, Arizona and Oregon, have enacted, passed or are working on legislation to regulate deepfakes, according to consumer advocacy nonprofit Public Citizen.
In 2019, California enacted a law aimed at combating manipulated media after a video that appeared to show House Speaker Nancy Pelosi drunk went viral on social media. Enforcing that law has been a challenge.
“We had to water it down,” said Assemblyman Marc Berman (D-Menlo Park), who authored the bill. “It drew a lot of attention to the potential risks of this technology, but I was worried that, at the end of the day, it really didn’t do much.”
Rather than take legal action, political candidates may choose to debunk deepfakes or simply ignore them to limit their spread, said Danielle Citron, a professor at the University of Virginia School of Law. By the time a case could make its way through the court system, the content may already have gone viral.
“The law is important because of the message it sends. It teaches us,” she said, adding that it signals to people who share deepfakes that there is a cost.
This year, lawmakers are working with the California Initiative for Technology and Democracy, a nonprofit project of California Common Cause, on several bills to address deceptive political content.
Some of the bills target online platforms, which are generally shielded under federal law from liability for content posted by their users.
Berman introduced a bill that would require online platforms with at least 1 million California users to remove or label deceptive election-related content within 120 days of an election. Platforms would have to act no later than 72 hours after a user reports a post. Under AB 2655, which passed the Legislature on Wednesday, platforms would also need procedures for identifying, removing and labeling fake content. The bill would not apply to parody or satire, or to news outlets that meet certain requirements.
Another bill, co-authored by Assemblymember Buffy Wicks (D-Oakland), would require online platforms to label AI-generated content. While NetChoice and TechNet, another industry group, opposed the bill, ChatGPT maker OpenAI supported AB 3211, Reuters reported.
However, neither bill would take effect until after the November election, underscoring the challenge of writing laws that keep pace with fast-evolving technology.
“Part of my hope with introducing this bill is the attention it creates, and hopefully the pressure it creates on social media platforms to act now,” Berman said.