Anthropic vs. the Pentagon: the First Amendment case that will define AI ethics for a decade
In July 2025, Anthropic signed a $200 million contract with the Pentagon. Claude would run on classified networks. It was the kind of deal that validated everything the AI safety crowd had been arguing — that you could build powerful AI and still maintain guardrails on how it gets used.
Eight months later, that same company became the first American business in history to be designated a supply chain risk by the Department of Defense. That label had previously been reserved for foreign adversaries like Huawei and Kaspersky. Now it was applied to an American AI company whose models were actively deployed across U.S. military systems.
What happened between July and February is worth paying attention to.
Two red lines
The dispute started simply enough. By September 2025, Anthropic was negotiating Claude's deployment on GenAI.mil, the Pentagon's AI platform. The DOD wanted unrestricted access to Claude for all lawful purposes. Anthropic drew two lines.
First: Claude would not be used for mass surveillance of American citizens. Second: Claude would not power fully autonomous weapons systems — weapons that select and engage targets without a human making the final call.
Neither of these was a radical position. Both align with existing DOD policy on autonomous weapons. The Pentagon's own Directive 3000.09 requires "appropriate levels of human judgment" in the use of force. Anthropic was basically asking DOD to put in writing what its own policy already says.
The DOD refused.
From contract dispute to presidential directive
In January 2026, DOD told Anthropic to grant unrestricted access or face consequences. Defense Secretary Pete Hegseth set a final deadline of February 27.
Anthropic held its position.
On February 27, Trump posted on Truth Social directing federal agencies to "IMMEDIATELY CEASE all use of Anthropic's technology." Hours later, Hegseth designated Anthropic a supply chain risk under 10 U.S.C. § 3252, a statute written to protect against adversaries who might "sabotage, maliciously introduce unwanted function, or otherwise subvert" government systems.
The same day, OpenAI announced it had struck a deal with the Pentagon to provide its own models for classified networks.
The legal theory that didn't hold up
Anthropic filed suit. The hearing, on March 24 in San Francisco, did not go well for the government.
Judge Rita Lin pressed DOD attorneys on the basis for the supply chain risk designation. The government's own internal files showed the designation wasn't triggered by any security assessment. An internal Pentagon memo referenced Anthropic's "increasingly hostile manner through the press" — not any technical vulnerability or espionage concern.
Two days later, Judge Lin issued a preliminary injunction blocking the designation. Her language was pointed:
"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."
And:
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."
She also found DOD violated Anthropic's due process rights by giving no advance notice and no opportunity to respond before the ban took effect.
Legal scholars at Just Security and Lawfare had already argued the designation stretched § 3252 well past its intended scope. The statute covers procurement exclusions for national security systems. Hegseth's directive prohibited defense contractors from conducting "any commercial activity" with Anthropic, which looks a lot more like sanctions authority. Congress never granted DOD that power.
The leaked memo and the market reaction
Between the ban and the ruling, things got messy.
On March 4, an internal memo from Dario Amodei leaked. In it, Anthropic's CEO called OpenAI's messaging around the Pentagon deal "straight up lies," referred to Sam Altman's public statements as "safety theater," and characterized OpenAI employees as "gullible." Two days later, Amodei publicly apologized for the tone while maintaining Anthropic's legal position.
The market reaction was immediate. ChatGPT uninstalls surged 295% day-over-day after OpenAI's Pentagon announcement. One-star reviews jumped 775%. Claude hit #1 on the U.S. App Store. Over 1.5 million users joined the "QuitGPT" movement.
People cared about the terms, not just the capability. That was the surprising part.
Why this isn't Google Maven 2.0
People keep comparing this to Google's 2018 Project Maven crisis. I don't think the comparison holds.
Maven was about whether a tech company should work with the military at all. Thousands of Google employees signed a letter. Google pulled out. The systems Maven built were image classifiers for drone footage.
Anthropic wasn't refusing to work with the Pentagon. It had a $200 million contract and wanted to keep it. The fight was over terms of deployment: what should an AI system that can reason, plan, and execute multi-step operations be allowed to do without a human in the loop?
In 2018, the AI was a classifier. In 2026, the AI is an agent. The question moved from "should tech help the military" to "under what conditions can autonomous AI systems operate in military contexts." Harder question. Higher stakes.
The part nobody wants to hear
The Electronic Frontier Foundation, along with the Cato Institute and FIRE, filed an amicus brief supporting Anthropic. But the EFF made a broader point that both sides find uncomfortable.
Privacy protections shouldn't depend on which CEO happens to care about them.
Anthropic drew a line. Good. But if Amodei gets hit by a bus tomorrow, or if the board decides the DOD revenue is too important to lose, those protections vanish. 71% of American adults say they're concerned about how the government uses data about them. 70% of adults who are aware of AI say they have little trust in corporate data practices. People want laws, not corporate goodwill.
Amodei himself acknowledged this. "I actually do believe it is Congress's job" to address surveillance risks posed by AI, he told reporters.
The Fourth Amendment Not for Sale Act — which would close the data broker loophole that lets intelligence agencies purchase surveillance data without warrants — passed the House in 2024 and stalled in the Senate. That loophole is directly relevant here: military and intelligence agencies already routinely purchase commercial data to enable broad surveillance without judicial oversight.
What happens next
The case is now on two parallel tracks. The DOJ noticed its appeal to the Ninth Circuit on April 2; filings are due April 30. A separate challenge, to a parallel designation under § 4713, the supply chain exclusion authority that covers civilian agencies, is pending in the D.C. Circuit. Both have to be resolved before the matter is fully settled.
Hours after Judge Lin's ruling, Pentagon CTO Emil Michael posted on X claiming the supply chain risk designation was "in full force and effect" under a different statutory authority than the one Lin blocked. That kind of jurisdictional maneuvering suggests the administration isn't backing down.
The Ninth Circuit ruling will likely come by mid-summer. If it upholds Lin's injunction, it establishes that the government can't use procurement designations to punish companies for public speech. That precedent extends well beyond AI. If it overturns the injunction, it signals that national security designations can be weaponized against domestic companies that disagree with government policy.
Either outcome sets a template. Every AI company negotiating government contracts is watching this. Every general counsel is recalibrating what they can and can't say publicly about how their technology gets used. The chilling effect is already operating, independent of which way the Ninth Circuit rules.
What this means if you build AI systems
I keep coming back to this: Anthropic's two red lines are things we talk about all the time in non-military contexts.
We've written about why human-in-the-loop is the production architecture for agent systems. Not because autonomy is impossible, but because the consequences of fully autonomous execution in high-stakes contexts are too severe to leave to a model alone. Anthropic's position on autonomous weapons is the military version of that same argument.
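To make that concrete, here is a minimal sketch of what a human-in-the-loop gate looks like inside an agent loop. Everything in it is invented for illustration: the tool names, the HIGH_STAKES_ACTIONS set, and the approval flow are assumptions for the sketch, not Anthropic's or any vendor's actual API.

```python
# A minimal human-in-the-loop gate for an agent loop. All names here
# (ProposedAction, HIGH_STAKES_ACTIONS, request_human_approval) are
# hypothetical, invented for this sketch.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str        # the tool the model wants to invoke
    args: dict       # the arguments it proposed
    rationale: str   # the model's stated reason, shown to the reviewer

# Tools that must never run without an explicit human decision.
HIGH_STAKES_ACTIONS = {"engage_target", "bulk_citizen_records_query"}

def request_human_approval(action: ProposedAction) -> bool:
    """Block until a human reviewer approves or rejects the action."""
    print(f"[REVIEW] {action.tool}({action.args}) because: {action.rationale}")
    return input("approve? [y/N] ").strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"executing {action.tool}")  # stand-in for the real tool call

def dispatch(action: ProposedAction) -> None:
    # The gate is structural: high-stakes tools are unreachable without
    # a human decision, no matter what the model outputs.
    if action.tool in HIGH_STAKES_ACTIONS and not request_human_approval(action):
        print(f"rejected: {action.tool}")
        return
    execute(action)

dispatch(ProposedAction("engage_target", {"id": 42}, "matched threat profile"))
```

The point of the structure is that the human check lives in the dispatcher, not in the model's instructions: no prompt, however adversarial, can route around it.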
And the security problems with agent permissions don't get better when the customer is the federal government. An AI agent with unrestricted access to classified systems and no contractual guardrails on surveillance is exactly the kind of risk that 48% of CISOs are already calling their top concern for 2026.
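The contractual-guardrails point has a direct software analogue: deny-by-default permission scoping. Here is a rough sketch, with invented profile and tool names, of what "no unrestricted access" means as configuration rather than policy prose:

```python
# Deny-by-default agent permissions, with invented profile and tool names.
# Dangerous capabilities are controlled by absence from the grant,
# not by a blocklist the agent might route around.
ALLOWED_SCOPES: dict[str, set[str]] = {
    "analyst_agent": {"search_docs", "summarize", "draft_report"},
    # No profile grants "mass_surveillance_query" or "autonomous_engage".
}

def authorize(profile: str, tool: str) -> bool:
    """A tool runs only if the profile explicitly grants it."""
    return tool in ALLOWED_SCOPES.get(profile, set())

assert authorize("analyst_agent", "search_docs")
assert not authorize("analyst_agent", "mass_surveillance_query")
assert not authorize("unknown_agent", "search_docs")  # unknown profiles get nothing
```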
The question this case is really about: do AI companies get to define the boundaries of their own technology's deployment, or can governments compel unrestricted access through economic coercion?
Judge Lin gave a clear answer. The Ninth Circuit will give a more permanent one this summer.
If you're interested in early access, reach out at hintas.com.