Meta will make its generative artificial intelligence (AI) models available to the US government, the tech giant has announced, in a controversial move that poses a moral dilemma for anyone using the software.
Meta revealed last week that it would make the models, known as Llama, available to government agencies, “including those working on defense and national security applications, and private sector partners supporting their work.”
The decision appears to conflict with Meta’s own acceptable use policy, which lists a range of prohibited uses for Llama, including “(m)ilitary uses, warfare, nuclear industries or applications,” as well as espionage, terrorism, human trafficking and exploitation or harm to children.
Meta’s exception reportedly also applies to similar national security agencies in the United Kingdom, Canada, Australia and New Zealand. It came just three days after Reuters reported that China had reworked Llama for its own military purposes.
The situation highlights the increasing vulnerability of open source AI software. It also means that users of Facebook, Instagram, WhatsApp and Messenger – some versions of which use Llama – could inadvertently contribute to military programs around the world.
What is Llama?
Llama is a collection of large language models – similar to ChatGPT – and large multimodal models that handle data other than text, such as audio and images.
Meta, Facebook’s parent company, released Llama as its answer to OpenAI’s ChatGPT. The main difference between the two is that all Llama models are marketed as open source and free to use. This means anyone can download a Llama model’s source code and run and modify it themselves (provided they have the right hardware). ChatGPT, by contrast, can only be accessed through OpenAI.
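As a rough illustration of what that openness means in practice, the sketch below uses the Hugging Face transformers library to download a Llama checkpoint and generate text with it locally. The model name, prompt and generation settings are illustrative assumptions, and running it requires accepting Meta’s licence terms on Hugging Face, an access token, and hardware with enough memory for the weights.

```python
# Illustrative sketch only: loading a publicly released Llama checkpoint
# and generating text locally with Hugging Face transformers.
# Assumes you have accepted Meta's licence, are logged in with a Hugging Face
# access token, and have hardware capable of holding the model weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example checkpoint name (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Because the weights are now on your own machine, you are free to fine-tune
# or otherwise modify them, which is the openness described above.
prompt = "In one sentence, what is an open source AI model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This downloadability is the same property that allows third parties, including the Chinese researchers mentioned above, to adapt Llama for their own purposes.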
The Open Source Initiative, an authority that defines open source software, recently released a standard outlining what open source AI should entail. The standard outlines “four freedoms” that an AI model must grant to be classified as open source:
- use the system for any purpose and without having to ask for permission
- study how the system works and inspect its components
- modify the system for any purpose, including to change its output
- share the system for others to use, with or without modifications, for any purpose.
Meta’s Llama does not meet these requirements. This is because of restrictions on commercial use, prohibitions on activities that could be considered harmful or illegal, and a lack of transparency about Llama’s training data.
Despite this, Meta still describes Llama as open source.
Meta no longer prohibits military use of its AI models. QubixStudio/Shutterstock
The intersection of the tech industry and the military
Meta is not the only commercial technology company focusing on military applications of AI. This past week, Anthropic also announced that it is partnering with Palantir – a data analytics company – and Amazon Web Services to give US intelligence and defense agencies access to its AI models.
Meta has defended its decision to allow US national security agencies and defense companies to use Llama. The company claims that these applications are “responsible and ethical” and “support the prosperity and security of the United States.”
Meta has not been transparent about the data it uses to train Llama. But companies developing generative AI models often use their users’ input data to further train their models, and people share a lot of personal information when they use these tools.
ChatGPT and DALL-E offer options to opt out of having your data collected. However, it is unclear whether Llama offers the same.
The ability to opt out is not made explicit when signing up to use these services. This puts the onus on users to inform themselves – and most users may not be aware of where or how Llama is being used.
For example, the latest version of Llama powers the AI tools on Facebook, Instagram, WhatsApp and Messenger. When people use the AI features on these platforms – such as creating reels or suggesting captions – they are using Llama.
Llama powers AI tools in apps like Facebook, Instagram and WhatsApp. Alexandra Popova/Shutterstock
The vulnerability of open source
The benefits of open source include open participation and collaboration in software development. However, this can also lead to vulnerable systems that are easy to manipulate. For example, following Russia’s invasion of Ukraine in 2022, members of the public made changes to open source software to express their support for Ukraine.
These changes included anti-war messages and the deletion of system files on Russian and Belarusian computers. This movement became known as ‘protestware’.
The intersection of open source AI and military applications will likely exacerbate this vulnerability, because the robustness of open source software depends on its public community. Large language models such as Llama rely especially on public use and engagement, as the models are designed to improve over time through a feedback loop between users and the AI system.
The shared use of open source AI tools brings together two parties – the public and the military – that historically have had very different needs and goals. This shift will expose unique challenges for both.
For the military, open access means that the finer details of how an AI tool works can be easily obtained, potentially leading to security and vulnerability issues. For the general public, the lack of transparency about how user data is used by the military could lead to a serious moral and ethical dilemma.