California Governor Gavin Newsom on Friday vetoed a landmark bill that would have established first-in-the-nation safety measures for large artificial intelligence models.
The decision is a major blow to efforts to rein in a homegrown industry that is evolving rapidly with little oversight. The bill would have established some of the first regulations on large-scale AI models in the country and paved the way for AI safety rules nationwide, advocates say.
Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the absence of federal action, but that the proposal “could have a chilling effect on the industry.”
The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by imposing rigid requirements, Newsom said.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Newsom on Sunday announced instead that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails for powerful AI models. Li opposed the AI safety proposal.
The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say such scenarios could become possible as the industry continues to advance at a rapid pace. It also would have provided whistleblower protections to workers.
The measure is one of a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must act this year, citing hard lessons learned from failing to rein in social media companies when they had the chance.
Supporters of the measure, including Elon Musk and Anthropic, said the proposal could have injected some level of transparency and accountability into large-scale AI models, as developers and experts say they still don’t fully understand how AI models behave and why.
The bill targeted systems that would cost more than $100 million to build. No current AI model has reached that threshold, but some experts say that could change in the coming years.
“That is due to the massive scale-up of investment within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “It’s a crazy amount of power for any private company to control unaccountably, and it’s also incredibly risky.”
The United States is already behind Europe in regulating AI to limit risks. The California proposal was not as comprehensive as Europe’s regulations, but it would have been a good first step toward setting guardrails around the rapidly growing technology, which is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.
Several leading AI companies in the past year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have required AI developers to follow requirements similar to those commitments, the measure’s supporters said.
But critics, including former US House Speaker Nancy Pelosi, argued that the bill would “kill California technology” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.
Newsom’s decision to veto the bill marks another victory in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to dissuade the governor and lawmakers from advancing AI regulations.
Two other sweeping AI proposals, which also faced opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.