A top Chinese research institution linked to the People’s Liberation Army has used Meta’s publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts.
In a June paper seen by Reuters, six Chinese researchers from three institutions, including two at the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Sciences (AMS), detailed how they used an early version of Meta’s Llama as the basis for what they call “ChatBIT”.
The researchers used the Llama 2 13B large language model (LLM) released by Meta (META.O), incorporating their own parameters to build a military-focused AI tool to gather and process intelligence and to offer accurate and reliable information for operational decision-making.
ChatBIT was fine-tuned and “optimized for dialogue and question-answering tasks in the military field”, the paper said. It was found to outperform some other AI models that were roughly 90% as capable as OpenAI’s powerful ChatGPT-4. The researchers did not elaborate on how they defined performance or specify whether the model had been put into service.
“It’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes,” said Sunny Cheung, an associate fellow at the Jamestown Foundation who specializes in China’s emerging and dual-use technologies, including AI.
Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.
Its terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to US defense export controls, as well as for the development of weapons and content intended to “incite and promote violence”.
However, because Meta’s models are public, the company has limited means of enforcing these provisions.
In response to Reuters questions, Meta cited its acceptable use policy and said it takes steps to prevent abuse.
“Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy,” Molly Montgomery, Meta’s director of public policy, told Reuters in a telephone interview.
The Chinese researchers include Geng Guotong and Li Weiwei of the AMS Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from the Beijing Institute of Technology and Minzu University.
“In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also … strategic planning, simulation exercises and command decision-making will be explored,” the paper said.
China’s Defense Ministry did not respond to a request for comment, nor did any of the institutions or researchers.
Reuters was unable to confirm ChatBIT’s capabilities and computing power, though the researchers noted that the model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs.
“That’s a drop in the ocean compared to most of these models (that) are trained with trillions of tokens, so … it really makes me question what they actually achieve here in terms of different capabilities,” said Joelle Pineau, vice president of AI Research at Meta and a professor of computer science at McGill University in Canada.
The research comes amid a heated debate in US national security and technology circles about whether companies such as Meta should make their models publicly available.
US President Joe Biden in October 2023 signed an executive order seeking to manage the development of AI, noting that despite substantial benefits to innovation, there were also “substantial security risks, such as the removal of safeguards within the model”.
This week, Washington said it was finalizing rules to curb US investment in artificial intelligence and other technology sectors in China that could threaten national security.
Pentagon spokesman John Supple said the Department of Defense recognizes that open-source models have both advantages and drawbacks, and that “we will continue to monitor and assess competitors’ capabilities”.
Some observers say China’s strides in developing indigenous AI, including setting up scores of research labs, have already made it difficult to prevent the country from narrowing its technology gap with the United States.
In a separate academic paper reviewed by Reuters, two researchers with the Aviation Industry Corporation of China (AVIC), which the United States has designated as a firm with ties to the PLA, described the use of Llama 2 for “the training of airborne electronic warfare interference strategies”.
China’s use of Western-developed AI has also extended to domestic security. A June paper described how Llama had been used for “intelligence policing” to process large amounts of data and enhance police decision-making.
The state-run PLA Daily published commentary in April on how AI could help “accelerate the research and development of weapons and equipment”, help develop combat simulations and improve the efficiency of military training.
“Can you keep (China) out of the cookie jar? No, I don’t see how you can,” William Hannas, lead analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), told Reuters. A 2023 paper by CSET found 370 Chinese institutions whose researchers had published papers related to general artificial intelligence – helping to drive China’s national strategy to lead the world in AI by 2030.
“There is too much collaboration going on between China’s best scientists and the best AI scientists in the US for them to be excluded from developments,” Hannas added.