On Trust and AI — Applied

Who Holds Our Trust?

The Pentagon’s AI Showdown with Anthropic and OpenAI

On February 27, 2026, an artificial intelligence company worth $380 billion did something almost no defense contractor has ever done: it walked away from the Pentagon.

Anthropic, maker of the Claude AI chatbot, refused to remove two safety restrictions from a $200 million contract with the U.S. Department of Defense. Within hours, the government designated it a “supply chain risk” — a label previously reserved for foreign adversaries. Within hours of that, rival OpenAI announced it would take the deal. The next day, the U.S. military used the banned AI in combat strikes against Iran anyway.

And then something unexpected happened. Millions of people switched sides.

Claude surged to #1 on the Apple App Store. A boycott movement called #CancelChatGPT claimed 2.5 million participants. Anthropic’s annualized revenue jumped from $14 billion to $19 billion in weeks. Supporters scrawled messages in chalk outside Anthropic’s headquarters. One read: “God loves Anthropic.”

This is the story of what happens when ethics collides with the military-industrial complex in the age of AI — told through the data, the deals, and the decisions of the people involved.

1. The Contract

In July 2025, the Pentagon awarded Anthropic a prototype contract worth up to $200 million to deploy Claude on classified military networks. Anthropic had been building toward this moment: in late 2024 it struck a deal to offer Claude through Palantir's platform, and by mid-2025 Claude was the only advanced AI model cleared for those networks.

The scope was broad. Pentagon personnel used Claude for intelligence analysis, modeling and simulation, operational planning, cyber operations, document synthesis, logistics, supply chain optimization, and satellite image analysis. The stakes were commercial as well as strategic: enterprise sales made up roughly 80% of Anthropic's revenue.

CEO Dario Amodei is not a pacifist. He wrote that AI should support national defense “in all ways except those which would make us more like our autocratic adversaries.” But he drew two lines.

Red line one: no fully autonomous weapons. Amodei argued that “frontier AI systems are simply not reliable enough” for “life-or-death targeting” because they “behave unpredictably in novel scenarios,” risking “friendly fire, mission failure or unintended escalation.”
Red line two: no mass domestic surveillance of Americans. AI could build “population-level profiles that no law explicitly prohibits but that clearly violate the spirit of constitutional protections.”

The Pentagon disagreed, not about the substance but about the principle. A January 9 memo directed all AI contracts to incorporate "all lawful purposes" language within 180 days. Undersecretary Emil Michael argued Anthropic's technology should be treated like Microsoft Excel: bound only by U.S. law, not by company usage policies. Officials gave assurances but refused to put them in writing as contractual restrictions.

People close to the situation say friction over style intensified the problem. Claude would refuse to engage in war-gaming scenarios, frustrating personnel. One person described the breakdown as "an ego and diplomacy problem."

2. The Ultimatum

On February 24, Defense Secretary Pete Hegseth met with Amodei and delivered a 72-hour ultimatum: comply by 5:01 PM Friday, or face consequences. He threatened to terminate the contract, designate Anthropic a supply chain risk, and invoke the Defense Production Act.

“Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”
— Dario Amodei, February 26

The Pentagon’s response was personal. Undersecretary Michael posted on X that Amodei was “a liar” with “a God-complex” who “wants nothing more than to try to personally control the US Military.”

More than 200 Google and OpenAI employees backed Anthropic in an open letter. Anthropic had just closed a $30 billion round at $380 billion. Its revenue run rate was $14 billion and climbing. Amodei let the deadline pass.

3. The Retaliation

On February 27, Trump posted on Truth Social:

“We don’t need it, we don’t want it, and will not do business with them again!”
— Donald Trump, Truth Social

He directed agencies to stop using Anthropic, allowing six months to transition. Hegseth then designated Anthropic a "supply chain risk," the first time the label had ever been applied to an American company.

Under 10 U.S.C. § 3252, the label is designed for foreign adversaries — the “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a system. It was created for companies beholden to Beijing or Moscow, not for American innovators operating under the rule of law.

Hegseth also threatened to invoke the Defense Production Act, a Cold War-era statute from 1950. Legal scholars at Lawfare called this an "enormous escalation," noting that the DPA's compulsion authorities have gone largely untested since the Korean War and that forcing an AI company to strip safety restrictions from its model would raise novel First Amendment questions.

The ironies piled up. Republicans had criticized Biden's far more modest uses of the DPA, yet Hegseth was threatening action "orders of magnitude more coercive." And it was hard to square calling Anthropic a supply chain risk with the premise that its technology was so essential the government needed to compel access to it.

Amodei responded: “We do not believe this action is legally sound, and we see no choice but to challenge it in court.” The actual scope proved narrower than implied — affecting only direct DoD contract work, not all commercial use. Microsoft confirmed it could continue using Anthropic for non-defense clients.

4. The Iran Paradox

The next day — February 28 — the United States and Israel launched Operation Epic Fury, striking more than 2,000 targets in Iran.

Two sources confirmed to CBS News that the military used Claude during the operation for intelligence assessments, target identification, battlefield simulations, and satellite image analysis.

Trump ordered the ban on Friday. The military used Claude in combat on Saturday.

The Pentagon acknowledged that an immediate withdrawal from Claude was "operationally infeasible." Defense One reported that replacing Claude's capabilities could take three months or more.

The contradiction was stark. If Claude was reliable enough to support more than 2,000 strikes in Iran, how could Anthropic be a supply chain risk? And if Anthropic was a genuine supply chain risk, why was the military using its model in the most sensitive operations imaginable?

The deeper irony: the Iran strikes involved human oversight assisted by AI — exactly the model Anthropic had been advocating for.

5. The Opportunist

Hours after Anthropic was punished, OpenAI announced a deal to replace Claude in classified environments. The $200 million contract included three restrictions: no mass surveillance, no autonomous weapons, no "social credit" scoring. OpenAI claimed the deal carried "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."

The hypocrisy was not subtle. OpenAI’s restrictions closely mirrored what Anthropic had been demanding. The difference was mechanical: Anthropic insisted on contractual language; OpenAI agreed to the Pentagon’s “all lawful purposes” standard and layered safeguards on top.

“Opportunistic and sloppy.”
— Sam Altman on OpenAI’s own Pentagon deal, March 3

By March 3, Altman was renegotiating the deal to add Fourth Amendment and FISA protections and to bar the NSA from using OpenAI's models without a separate contract.

Elon Musk's xAI entered the fray as well, signing the first deal to accept full "all lawful purposes" terms for classified use. But replacing Claude would take months: Grok had not been tested at the same scale.

6. The Public Verdict

The market rendered a verdict the government had not anticipated.

On February 28, Claude hit #1 on the U.S. iOS App Store for the first time. It became the top AI app in more than 20 countries. ChatGPT dropped to #2.

Claude’s daily downloads had averaged 62,000 in mid-January. After the Super Bowl ad on February 8, they spiked to 225,000 — a 3.6× jump. After the Pentagon conflict, Anthropic reported all-time record sign-ups every single day of the week of February 28. More than one million people were signing up per day.

Free active users had grown more than 60% since January. Paid subscribers had more than doubled. U.S. daily-active-user market share nearly tripled, from 1.5% in January to roughly 4% in February. Churn improved from 55% in August 2025 to 36% in February 2026.
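
These ratios are easy to check. Below is a minimal sketch (Python) that recomputes them from the figures cited in this article; the variable names are illustrative, and no new data is introduced.

```python
# Sanity check of the growth figures reported above.
# All inputs are the numbers cited in this article; nothing here is new data.

downloads_jan = 62_000   # average daily downloads, mid-January
downloads_feb = 225_000  # daily downloads after the February 8 Super Bowl ad

share_jan = 1.5          # U.S. daily-active-user market share, January (%)
share_feb = 4.0          # approximate share, February (%)

churn_aug25 = 55         # reported churn, August 2025 (%)
churn_feb26 = 36         # reported churn, February 2026 (%)

print(f"Download jump:     {downloads_feb / downloads_jan:.1f}x")  # -> 3.6x
print(f"DAU share growth:  {share_feb / share_jan:.1f}x")          # -> 2.7x, "nearly tripled"
print(f"Churn improvement: {churn_aug25 - churn_feb26} points")    # -> 19 points
```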

The boycott was organized and loud. The #CancelChatGPT campaign claimed 2.5 million participants. TechCrunch reported ChatGPT uninstalls surged 295%.

But context matters. ChatGPT still had over 900 million weekly active users. The question was whether this moment would catalyze sustained growth or prove to be a spike.

The enterprise picture told a different story.

By February 2026, Anthropic commanded approximately 65% of combined Anthropic-plus-OpenAI business spending — up from just 10% at the start of 2025. More than 500 customers spent $1 million or more per year. Claude Code alone was generating $2.5 billion in annualized revenue.

A critical nuance: Ramp data showed Anthropic was not winning by stealing OpenAI's customers. Roughly 79% of Anthropic customers also maintained OpenAI subscriptions, and 16% of businesses paid for both, a share that had doubled year-over-year. The AI market was expanding, not reshuffling.

7. The Industry Rallies

Hundreds of tech workers signed an open letter urging the DoD to withdraw the designation. Signatories came from OpenAI, Slack, IBM, Cursor, and Salesforce Ventures. The letter warned:

“Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation.”
— Open letter, 200+ tech workers

The Information Technology Industry Council, whose members include Nvidia, Amazon, and Apple, wrote to Hegseth that the designation "threatens to undermine the government's access to the best-in-class products and services from American companies."

Even inside OpenAI, researcher Boaz Barak wrote that mass domestic surveillance was his "personal red line" and that "it should be all of ours."

Congressional reaction was bipartisan. Senator Markey called the designation “reckless and unprecedented.” Senator Coons compared the approach to “Xi Jinping’s.” Senator Gillibrand: “The government openly attacking an American company for refusing to compromise its own safety measures is something we expect from China, not the United States.”

Former CIA director Michael Hayden and retired military leaders wrote to Congress: “Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute.”

8. The Question

As Lawfare argued, the deeper problem is that Congress has not set substantive rules for military AI. In the absence of legislation, rules are set by confrontation between companies and the executive branch.

The market is rewarding Anthropic. Annualized revenue surged to $19 billion. Claude topped the App Store. Enterprise spending share climbed to 65%. More than a million people signed up per day.

But the government is punishing it. The supply chain risk designation stands. Defense contractors are removing Anthropic’s AI. Lockheed Martin said it would look to other providers. The legal challenge is unresolved.

Both companies are expected to pursue public offerings in 2026. Anthropic is valued at $380 billion; OpenAI at $500 billion. How this resolves will define their IPO narratives.

The facts carry their own argument. An American company refused to remove two safety restrictions. The government threatened it with a Cold War-era statute, labeled it a supply chain risk in a designation never before applied to an American company, and used its technology in combat the very next day. Its competitor took the deal, then admitted it was "opportunistic and sloppy." The public rewarded the company the government punished. The industry rallied to its defense.

“Disagreeing with the government is the most American thing in the world. And we are patriots.”
— Dario Amodei

The question is not whether that sentiment is noble. The question is whether it is sustainable — and who gets to decide.


This story is one of the defining case studies for the themes in On Trust and AI. Who holds trust when AI systems become powerful enough to shape national security? How do we design accountability when the stakes are this high? The book's frameworks (trust boundaries, the decision perimeter, verification as product) offer a lens for thinking through exactly these questions.


All factual claims, data points, and quotes are sourced from primary reporting by Reuters, CBS News, AP News, BBC News, Fortune, TechCrunch, Lawfare, Ramp, and Apptopia.
