Russia is the most prolific foreign influence actor using artificial intelligence to produce content targeting the 2024 presidential election, US intelligence officials said on Monday.
Cutting-edge technology has made it easier for Russia and Iran to quickly and more convincingly produce often-polarizing content aimed at swaying American voters, an official from the Office of the Director of National Intelligence, who spoke on condition of anonymity, told reporters at a briefing.
“The [intelligence community] considers AI a malign influence accelerant, not yet a revolutionary influence tool,” the official said. “In other words, information operations are the threat, and AI is the enabler.”
Intelligence officials have previously said they have seen AI used in elections abroad. “Our update today makes it clear that this is now happening here,” an ODNI official said.
Russian influence operations have spread synthetic images, videos, audio and text online, officials said. This includes AI-generated content “from and about prominent US figures” and material that seeks to emphasize divisive issues such as immigration. Officials said it was consistent with the Kremlin’s broader goal of boosting former President Donald Trump and undermining Vice President Kamala Harris.
But Russia also uses lower-tech methods. ODNI officials say a Russian influence actor made a video in which a woman claimed to be the victim of a hit-and-run by Harris in 2011; there is no evidence that ever happened. Last week, Microsoft also said Russia was behind the video, which was distributed by a website posing as a local San Francisco TV station that does not exist.
Russia was also behind manipulated videos of Harris’ speeches, ODNI officials said. The videos may have been altered with conventional editing tools or with AI, and they were spread on social media and through other channels.
“One of the efforts that Russian influence actors make when they create this media is to try to encourage its spread,” the ODNI official said.
The official said the Harris videos had been altered in various ways to paint her in a negative light, to draw unflattering comparisons with her opponent, and to focus on issues Russia believes to be divisive.
Iran is also tapping AI to generate social media posts and write fake stories for websites posing as legitimate news outlets, officials said. The intelligence community says Iran is seeking to undermine Trump in the 2024 election.
Iran has used AI to create its content in English and Spanish, and is targeting Americans “across the political spectrum on polarizing issues” including the war in Gaza and presidential candidates, officials said.
China, the third major foreign threat to US elections, is using AI in a broader influence operation aimed at shaping global views of China and amplifying divisive US topics such as drug use, immigration and abortion, officials said.
However, officials said they have not identified any AI-powered operations targeting the outcome of US elections. The intelligence community has said Beijing’s influence operations are focused more on down-ballot races in the US than on the presidential contest.
US officials, lawmakers, tech companies, and researchers have worried about the potential for AI-powered manipulation to upend this year’s election campaign, such as fake videos or audio depicting candidates doing or saying things they never did, or content misleading voters about the voting process.
While that threat could still materialize as Election Day nears, so far AI has been used more often in other ways: by foreign adversaries to improve productivity and boost volume, and by political partisans to generate memes and jokes.
On Monday, ODNI officials said foreign actors have been slow to overcome three main obstacles that keep AI-generated content from posing a greater risk to American elections: first, overcoming the guardrails built into many AI tools without being detected; second, developing their own sophisticated models; and third, strategically targeting and distributing AI content.
As Election Day nears, the intelligence community will be monitoring foreign efforts to introduce deceptive or AI-generated content in a variety of ways, including “laundering material through prominent figures,” using fake social media accounts or websites masquerading as news outlets, or “leaking AI-generated content that appears sensitive or controversial,” the ODNI report said.
Earlier this month, the Justice Department accused Russian state broadcaster RT, which the US government says is an arm of Russian intelligence services, of covertly funneling nearly $10 million to pro-Trump American influencers who posted videos critical of Harris and Ukraine. The influencers have said they did not know the money came from Russia.