A controversial bill that seeks to protect Californians from artificial intelligence-driven disasters has caused an uproar in the tech industry. This week, the legislation passed a key committee, but with amendments intended to make it more palatable to Silicon Valley.
SB 1047, from state Sen. Scott Wiener (D-San Francisco), is set to move to the floor of the state Assembly later this month. If it passes the Legislature, Gov. Gavin Newsom will have to decide whether to sign or veto the groundbreaking legislation.
Supporters of the bill say it would create guardrails to prevent rapidly advancing AI models from causing catastrophic incidents, such as shutting down the power grid without warning. They worry that the technology is developing faster than its human creators can control it.
Lawmakers aim to incentivize developers to handle the technology responsibly and to empower the state attorney general to impose penalties in the event of threats or harm. The bill also requires developers to be able to shut down the AI models they control immediately if problems arise.
But some tech companies, such as Facebook owner Meta Platforms, and politicians including influential U.S. Rep. Ro Khanna (D-Fremont), say the bill will stifle innovation. Some critics say it focuses on apocalyptic, far-fetched scenarios rather than more immediate concerns such as privacy and misinformation, although other bills address those issues.
SB 1047 is one of roughly 50 AI-related bills that have been introduced in the state Legislature amid concerns about the technology's effects on jobs, disinformation and public safety. As politicians work to write new laws to rein in the fast-growing industry, some companies and creative talent are suing AI firms in the hope that the courts can set the ground rules.
Wiener, who represents San Francisco — home of AI startups OpenAI and Anthropic — has been at the center of the debate.
On Thursday, he made significant changes to his bill that some believe undermine the legislation while making it more likely that the Assembly will pass it.
The amendment removes the perjury penalty from the bill and changes the legal standard for developers regarding the safety of advanced AI models.
Additionally, plans to create a new government entity, to be called the Frontier Model Division, have been dropped. The original text of the bill required developers to submit their safety measures to that newly created division; under the new version, developers will submit those safety measures to the attorney general instead.
“I think some of these changes might work,” said Christian Grose, a USC professor of political science and public policy.
Several prominent AI figures support the bill, including the Center for AI Safety and Geoffrey Hinton, who is considered a "godfather of AI." Still, others worry it could hurt California's booming industry.
Eight members of Congress from California — Khanna, Zoe Lofgren (D-San Jose), Anna G. Eshoo (D-Menlo Park), Scott Peters (D-San Diego), Tony Cárdenas (D-Pacoima), Ami Bera (D-Elk Grove), Nanette Díaz Barragán (D-San Pedro) and Lou Correa (D-Santa Ana) — wrote a letter to Newsom on Monday encouraging him to veto the bill if it passes the state Assembly.
"(Wiener) is really squeezed in San Francisco between the people who are experts in this field, who have told him and others in California that AI can be dangerous if we don't regulate it, and the people who fund and conduct the best AI research," Grose said. "This could be a real flash point for him, pro and con, for his career."
Some tech giants say they are open to regulation but disagree with Wiener’s approach.
"We agree with the way (Wiener) describes the bill and the goals he has, but we remain concerned about the bill's impact on AI innovation, especially in California, and especially on open-source innovation," Kevin McKinley, a state policy manager at Meta, said in a meeting with members of the L.A. Times editorial board last week.
Meta is one company with a collection of open-source AI models, called Llama, that developers can build on for their own products. Meta released Llama 3 in April, and it has already been downloaded 20 million times, the tech giant said.
Meta declined to discuss the new amendments. Last week, McKinley said SB 1047 "is actually a very tough bill to red-line and fix."
A spokesperson for Newsom said his office does not typically comment on pending legislation.
"The governor will evaluate this bill on its merits should it reach his desk," spokesperson Izzy Gardon said in an email.
San Francisco AI startup Anthropic, known for its AI assistant Claude, has signaled it could support the bill if it is amended. In a July 23 letter to Assemblymember Buffy Wicks (D-Oakland), Anthropic's state and local policy lead, Hank Dempsey, proposed changes, including shifting the bill's focus to holding companies responsible after they cause catastrophes rather than enforcing safety requirements before any harm occurs.
Wiener said the amendments address Anthropic's concerns.
“We can advance innovation and safety,” Wiener said in a statement. “The two are not exclusive.”
It's unclear whether the amendments will change Anthropic's position on the bill. On Thursday, Anthropic said in a statement that it would review "the bill language as it becomes available."
Russell Wald, deputy director at Stanford HAI, the university's institute for advancing AI research and policy, said he still opposes the bill.
"The new amendments appear to be more about optics than substance," Wald said in a statement. "They seem less controversial in order to appease a few leading AI companies, but they don't address the real concerns of academic institutions and the open-source community."
It's a delicate balance for lawmakers trying to weigh concerns about AI while also supporting the state's tech sector.
"What many of us are trying to grapple with is a regulatory environment that allows some guardrails to exist while not hindering the innovation and economic growth that come with AI," Wicks said after Thursday's committee hearing.
Times staff writer Anabel Sosa contributed to this report.