OpenAI and Anthropic, two of the world’s richest artificial intelligence startups, have agreed to let the US AI Safety Institute test their new models before they are released to the public, amid growing industry concerns about safety and ethics in AI.
The institute, housed within the Department of Commerce’s National Institute of Standards and Technology (NIST), said in a press release that it will gain “access to major new models from each company before and after public release.”
The institute was established after the Biden-Harris administration issued the US government’s first executive order on artificial intelligence in October 2023, which calls for safety assessments, equity and civil rights guidance, and research on AI’s impact on the labor market.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote in a post on X. OpenAI also confirmed to CNBC on Thursday that it has doubled its weekly active users over the past year to 200 million. Axios first reported the figure.
The news comes a day after reports that OpenAI is in talks to raise a funding round valuing the company at more than $100 billion. Thrive Capital led the round and will invest $1 billion, according to a source familiar with the matter who asked not to be named because the details are confidential.
Anthropic, founded by former OpenAI research executives and employees, was most recently valued at $18.4 billion. Anthropic counts Amazon as a leading investor, while OpenAI is heavily backed by Microsoft.
The agreement between the government, OpenAI and Anthropic “will enable collaborative research on how to assess capabilities and security risks, as well as ways to mitigate those risks,” according to a Thursday release.
Jason Kwon, OpenAI’s chief strategy officer, said in a statement to CNBC, “We strongly support the mission of the US AI Safety Institute and look forward to working together to inform safety best practices and standards for AI models.”
Jack Clark, a co-founder of Anthropic, said the company “collaborated with the US AI Safety Institute to use their extensive expertise to test our models before deployment,” which will “strengthen our ability to identify and mitigate risks, promoting responsible AI development.”
Some AI developers and researchers have expressed concerns about safety and ethics in the increasingly for-profit AI industry. Current and former OpenAI employees published an open letter on June 4, outlining potential problems with rapid advances in AI and a lack of oversight and whistleblower protection.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe that corporate governance structures are sufficient to change this,” they wrote. AI companies, they added, “currently have little obligation to share some of this information with governments, and none with civil society,” and cannot “be relied upon to share it voluntarily.”
The day after this letter was published, sources familiar with the matter confirmed to CNBC that the FTC and the Department of Justice are set to open antitrust investigations into OpenAI, Microsoft and Nvidia. FTC Chairwoman Lina Khan described the agency’s action as a “market investigation of investments and partnerships formed between AI developers and major cloud service providers.”
On Wednesday, California lawmakers passed a hot-button AI safety bill, sending it to Governor Gavin Newsom’s desk. Newsom, a Democrat, will decide whether to veto the legislation or sign it into law by September 30. The bill has been contested by some technology companies for its potential to slow innovation.