The Nov Tech

Nobody Controls This War

Claude was inside CENTCOM when the bombs fell on Iran. Nobody could turn it off in time.

Novy Baf
Mar 09, 2026
Image made by Novy Bafouka

On February 28, 2026, the United States and Israel launched Operation Epic Fury: nearly 900 coordinated strikes in the first 12 hours, targeting Iranian missile infrastructure, nuclear facilities, and the compound of Supreme Leader Ali Khamenei, who was killed in the opening wave alongside dozens of senior officials.

Inside CENTCOM’s command systems, an artificial intelligence model was running.

That tool was a variant of Claude, made by Anthropic, the same company whose products millions of people used to summarize PDFs and draft emails. The military version, sometimes referred to internally as Claude Gov, shares its technical foundation with the consumer model but runs on classified networks. Through a $200 million contract with the DoD, Claude had become the first major model deployed on the government’s classified networks. Anthropic describes its military uses as strategic planning, operational support, intelligence analysis, and threat assessment.

The bombs were falling. And the AI was still in the system.

The Contract, the Red Line, and the Ultimatum

The path to that moment began in July 2025, when Anthropic signed a $200 million prototype contract with the Pentagon, the first agreement of its kind to place a frontier AI model on classified government networks. The contract was signed with two specific restrictions built in: Claude would not be used for mass domestic surveillance of American citizens, and it would not be used to power fully autonomous weapons. The Pentagon agreed to those terms. Operations proceeded normally for months.

Then the terms became a problem.

In early 2026, Defense Secretary Pete Hegseth issued an internal AI strategy memo requiring that all Department of Defense AI contracts permit the technology’s application for “any lawful purpose,” omitting the specific exclusions that Anthropic had drafted. According to two sources familiar with the discussions, the Pentagon wanted the company to lift its restrictions so the military could use the model for “all lawful use.”

On February 24, Hegseth summoned Dario Amodei, Anthropic’s CEO, to the Pentagon and delivered the terms directly: remove the restrictions by Friday evening or lose the contract. The threat went further than a simple cancellation.


If Anthropic refused, Hegseth would also label the company a supply chain risk, according to one official.

That threat rested on the Defense Production Act (DPA), a law that gives the government the ability to influence businesses in the interest of national defense. A supply chain risk designation would effectively prohibit any company holding military contracts from using Anthropic’s technology in its work with the Pentagon, threatening to cascade through much of Anthropic’s enterprise customer base.

On February 26, Amodei responded publicly. He wrote that Anthropic believes deeply in AI’s importance for defending democracies, but that threats would not change the company’s position. Speaking with CBS News, he emphasized that Anthropic had actively pursued military applications for its models: “We are patriotic Americans,” he said. “We believe in this country.”

Trump called the company a “radical left, woke company.” Hegseth called it “sanctimonious.” Under Secretary of Defense for Research and Engineering Emil Michael said that Amodei has a “God-complex,” and that Anthropic’s “true objective is unmistakable: to seize veto power over the operational decisions of the United States military.”

The Friday evening deadline passed without agreement. Trump ordered federal agencies to stop using Anthropic’s products. The supply chain risk designation was formally applied.

Two days after Amodei’s response, the bombs fell on Iran. Claude was still in the system. Removing a software model from classified military infrastructure is not a weekend task. The political order and the operational reality were not synchronized.

What Anthropic’s Red Lines Actually Covered, and What They Didn’t

Here is where the story requires more precision than most coverage has offered.

Anthropic’s two restrictions were meaningful. They were also narrower than they might appear.

The first restriction prohibited Claude’s use for mass domestic surveillance of American citizens. Not foreign nationals. Not allied nations’ populations. American citizens specifically.

In his statement, Amodei said contracts with the Department of War should not allow Claude to be deployed for mass domestic surveillance or integrated into fully autonomous weapons. What fell outside that perimeter was never addressed: foreign intelligence work, bulk data analysis of non-American populations, real-time signals processing across conflict zones.

The second restriction prohibited fully autonomous weapons: systems that select targets and execute lethal action without a human decision-maker in the loop. Amodei said his company isn’t categorically opposed to those kinds of weapons, especially if U.S. adversaries develop them, but “the reliability is not there yet” and “we need to have a conversation about oversight.” The restriction covered full autonomy. It did not cover AI-assisted targeting with a human present to validate the strike.

What took place during Operation Epic Fury is not necessarily outside the terms of the contract Anthropic had already signed.

OpenAI’s Deal, and the Gap That Only Looks Like Protection

Within hours of the Trump administration’s decision to designate Anthropic a supply chain risk, OpenAI announced a new classified systems agreement with the Pentagon. Sam Altman said publicly that his company shared Anthropic’s concerns about mass surveillance and autonomous weapons. Altman later conceded that OpenAI “shouldn’t have rushed” its deal and outlined revisions to its safeguards on how the Defense Department can use its technology.

Image made by Novy Bafouka

The structural difference between the two companies’ approaches is significant, and Amodei addressed it directly in a memo to staff, in which he dismissed OpenAI’s dealings with the Department of Defense as “safety theater.”

“The main reason [OpenAI] accepted [the DoD’s deal], and we did not, is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote.

Anthropic wanted explicit prohibitions written into the contract text: specific language stating that certain uses are not permitted. OpenAI instead relied on usage agreements, established legal frameworks, and its own technical infrastructure to control how the technology is used. In a blog post, OpenAI stated that under its contract its AI systems can be utilized for “all lawful purposes.”

Critics noted a specific problem with the “lawful use” framework: the law is subject to change. What is illegal today may be authorized tomorrow.

There is a structural precedent for this concern. In 2013, Edward Snowden revealed that the NSA had been collecting millions of records on American citizens without their knowledge or consent, operating within a legal framework that permitted bulk surveillance programs under classified rulings. A former director of the NSA now sits on OpenAI’s board. Critics have noted that even though OpenAI’s contract acknowledges the two specific concerns, its safeguards rest on the same legal principles that previously governed NSA surveillance programs. None of this makes OpenAI’s approach necessarily bad. It does make the “lawful use” guarantee more contingent than its framing suggests.

And even when an AI company writes explicit prohibitions into its contract, enforcement is not guaranteed. An Anthropic employee reportedly expressed unease about Claude’s involvement, through a Palantir contract, in the Venezuela raid. Anthropic’s systems had also been operating on classified military networks well before Operation Epic Fury brought the partnership into public view.
