OpenAI Revises Pentagon Pact Following Ethics Backlash
OpenAI is amending its new agreement with the US Department of War following a swift and severe public backlash over the use of its technology in classified military operations. Chief Executive Sam Altman admitted on Monday that the rollout was “opportunistic and sloppy,” prompting the company to insert explicit prohibitions against using its AI for domestic surveillance. The move comes as users defect to rival platforms in protest, forcing OpenAI to clarify its “red lines” regarding military engagement.
The deal, signed last Friday, came in the wake of a public falling-out between the Pentagon and OpenAI’s rival, Anthropic. The Trump administration blacklisted Anthropic after its CEO, Dario Amodei, refused to allow the “Claude” AI model to be used for mass surveillance or in fully autonomous weapons systems. OpenAI immediately moved to fill the vacuum, but the optics of “swooping in” triggered a 295% surge in ChatGPT app uninstalls over the weekend.
Under the new amendments, Altman said, OpenAI systems will not be “intentionally used for domestic surveillance of U.S. persons.” Furthermore, intelligence agencies such as the National Security Agency (NSA) are now barred from using the technology without a specific “follow-on modification” to the contract. The adjustments are intended to de-escalate tensions within the tech community and among employees who fear the company is straying from its non-profit roots toward becoming a “war machine.”
Despite the administration’s ban, reports emerged on Tuesday that the US military used Anthropic’s Claude to support Operation Epic Fury, a joint US-Israeli strike on Iran that resulted in the death of Supreme Leader Ayatollah Ali Khamenei. Military officials noted that while a six-month phase-out is planned, Claude remains “embedded” in classified networks for target identification and battlefield simulation. This overlap underscores the difficulty of detaching complex AI tools from live combat operations once they are integrated.
While OpenAI and Anthropic battle over ethical “red lines,” other firms are deepening their military footprints. Palantir recently secured a £240 million contract with the UK Ministry of Defence to integrate its “Maven” AI platform across NATO operations. Unlike Anthropic, Palantir does not support a blanket ban on autonomous weapons, preferring a “human in the loop” approach. Critics argue that with Anthropic’s exit, the most safety-conscious actors are being removed from the rooms where lethal decisions are made.
The controversy highlights a growing schism in Silicon Valley over the morality of AI in warfare. OpenAI’s attempt to retroactively add “guardrails” suggests a company struggling to balance its commercial ambitions with its safety mission. For now, the “opportunistic” tag remains. As Altman prepares for an all-hands meeting on Wednesday to address staff concerns, the industry remains divided on whether AI should be the brain of the battlefield or a strictly civilian tool.
