Earlier this month, the company that brings us ChatGPT announced a partnership with the California-based weapons company Anduril to produce AI weapons. The OpenAI-Anduril system, tested in California at the end of November, permits the sharing of data between external parties for decision-making on the battlefield. This fits squarely within the US military's and OpenAI's plans to normalize the use of AI on the battlefield.
By Nuvpreet Kalra and Tim Biondo
Anduril, based in Costa Mesa, makes AI-powered drones, missiles, and radar systems, including its Sentry surveillance towers, which are currently deployed at US military bases worldwide, along the US-Mexico border, and on the British coastline to detect migrants in boats. On December 3, the company received a three-year contract with the Pentagon for a system designed to give soldiers AI-driven support during attacks.
In January, OpenAI deleted from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Less than a week later, the company announced a cybersecurity partnership with the Pentagon.
OpenAI may have removed the ban on making weapons, but its lurch into the war industry stands in total antithesis to its own charter. The company’s proclaimed mission to build “safe and beneficial AGI [Artificial General Intelligence]” that does not “harm humanity” is laughable when its technology is being used to kill. ChatGPT could feasibly, and probably soon will, write code for an automated weapon, analyze intelligence for bombings, or assist in invasions and occupations.
We should all be frightened by this use of AI for death and destruction. But it is not new. Israel and the US have been testing and using AI in Palestine for years. Hebron, in fact, has been dubbed a “smart city” as the occupation enforces its tyranny through a proliferation of motion and heat sensors, facial recognition technologies, and CCTV surveillance. At the center of this oppressive surveillance is the Blue Wolf System, an AI tool that scans the faces of Palestinians photographed by Israeli occupation soldiers and checks them against a biometric database of stored information. Once a photo is entered, the system assigns each person a color-coded rating based on their perceived ‘threat level,’ dictating whether the soldier should let them pass or arrest them. According to revelations from the Washington Post in 2021, IOF soldiers were rewarded with prizes for taking the most photographs, and the system has been termed “Facebook for Palestinians.”
OpenAI’s war technology arrives as the Biden administration pushes for the US to use AI to “fulfill national security objectives.” That phrase comes from the title of a White House memorandum released in October this year, which calls for rapid development of artificial intelligence “especially in the context of national security systems.” While the memorandum does not explicitly name China, a perceived ‘AI arms race’ with China is clearly a central motivation behind it. Nor is the race solely about weapons of war; it extends to the development of technology writ large. Earlier this month, the US banned the export to China of high-bandwidth memory (HBM) chips, a critical component of AI accelerators and high-end graphics processing units (GPUs). Former Google CEO Eric Schmidt warned that China is two to three years ahead of the US in AI, a major reversal from his statements earlier this year that the US was ahead of China. When he speaks of a “threat escalation matrix” for developments in AI, he reveals that the US sees the technology only as a tool of war and a way to assert hegemony. AI is the latest front in the US’ unrelenting, and dangerous, provocation and fearmongering toward China, which it cannot bear to see advance past it.
In response to the White House memorandum, OpenAI released a statement of its own, reasserting many of the White House’s lines about “democratic values” and “national security.” But what is democratic about a company developing technology to better target and bomb people? Who is made secure by collecting information to sharpen war technology? The statement reveals the company’s alignment with the Biden administration’s anti-China rhetoric and imperialist justifications. Coming from the company that has done more than any other to push AI systems into everyday life, it is deeply alarming that OpenAI has ditched its own code and jumped right in with the Pentagon. It is not surprising that companies like Palantir, or Anduril itself, are using AI for war; from a supposedly mission-driven nonprofit like OpenAI, we should expect better.
AI is being used to streamline killing: at the US-Mexico border, in Palestine, and in US imperial outposts across the globe. AI systems may seem innocently embedded in our daily lives, from search engines to music streaming sites, but we must not forget that the same companies are using the same technology lethally. While ChatGPT might give you ten ways to protest, it is likely being trained to kill, better and faster.
From the war machine to our planet, AI in the hands of US imperialists means only more profits for them and more devastation and destruction for us all.
Nuvpreet Kalra is CODEPINK’s Digital Content Producer. Nuvpreet completed a Bachelor’s in Politics & Sociology at the University of Cambridge, and an MA in Internet Equalities at the University of the Arts London. As a student, she was part of movements to divest and decolonize, as well as anti-racist and anti-imperialist groups. Nuvpreet joined CODEPINK as an intern in 2023, and now produces digital and social media content. In England, she organizes with groups for Palestinian liberation, abolition and anti-imperialism.
Tim Biondo is the digital communications manager for CODEPINK. They hold a bachelor’s degree in Peace Studies from The George Washington University.