In a concerning development, Chinese researchers with ties to the military have built artificial intelligence systems on top of Meta's open-source Llama model, and according to recent reports these systems are explicitly designed for military applications.
A team of six researchers from three institutions linked to the People's Liberation Army (PLA) published a paper in June detailing their work. They took Llama 13B, an early version of Meta's large language model, and fine-tuned it on military data. The resulting system, dubbed ChatBIT, is intended to support intelligence gathering, intelligence processing, and decision-making.
The model was fine-tuned on 100,000 military dialogue records, a comparatively small corpus by large-language-model standards. The researchers also outlined plans to expand ChatBIT's capabilities, stating that it could be used for "strategic planning, simulation training, and command decision-making" in the future.
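The paper does not include implementation details, but supervised fine-tuning of a Llama-class model on dialogue records typically follows a standard pattern. The sketch below, written against the Hugging Face transformers, datasets, and peft libraries, is purely illustrative: the checkpoint name, dataset file, column name, and hyperparameters are assumptions, not details from the researchers' work.

```python
# Illustrative sketch of supervised fine-tuning a Llama base model on
# domain dialogue records. The checkpoint, dataset path, and settings
# below are hypothetical stand-ins, not details from the paper.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-13b-hf"  # stand-in; the paper reportedly used an earlier Llama 13B
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small adapter matrices instead of all 13B parameters,
# keeping the fine-tune cheap on a modest corpus.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Assumed JSONL file of dialogue records, one record per line,
# each with a single "text" field holding the formatted dialogue.
dataset = load_dataset("json", data_files="dialogue_records.jsonl")["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="chatbit-sketch",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=tokenized,
    # mlm=False gives the standard causal (next-token) language-modeling objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The LoRA configuration reflects a common design choice for adapting a 13B-parameter model: updating small low-rank adapters rather than the full weights makes a fine-tune on roughly 100,000 records feasible on limited hardware.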
Additional papers describe similar Llama-based projects:
- A Llama-based model has been deployed for domestic policing, assisting law enforcement with data analysis and decision-making.
- Researchers at a PLA-affiliated aviation firm are using Llama 2 for "training of airborne electronic warfare interference strategies."
These developments come in the wake of Meta CEO Mark Zuckerberg's push for open-source AI, which he argued would make the world "more prosperous and safer." However, Meta's acceptable use policy explicitly prohibits the use of its models for military and warfare applications, which would cover work carried out for the PLA.
Meta's director of public policy, Molly Montgomery, stated that any use of the company's models by the PLA is "unauthorized and contrary to our acceptable use policy." But because Llama's weights are openly downloadable, Meta has little practical ability to enforce that policy once a model is released.
This situation highlights the difficulty at the heart of open-source AI development: once model weights are published, their creator cannot prevent unintended applications, military ones included.