This Is A Story About Power
The Pentagon vs. Anthropic
The following is a story about power. Last Friday, the Trump administration broke with Anthropic after the company refused to agree to new terms imposed by the Department of War, which demanded it consent to “all lawful uses”, including mass domestic surveillance and fully autonomous weapons.1
After Anthropic stood its ground and rejected the terms on principle, Pete Hegseth and President Trump took to Twitter to call the company woke and un-American, announcing that the administration would designate Anthropic a “supply chain risk”: a measure historically reserved for US adversaries, and never before publicly applied to an American company.
CEO Dario Amodei characterized the actions as retaliatory and punitive, but ultimately said the company would be fine.
Less than 24 hours later, OpenAI announced it had struck a deal with the Department of War, one it claimed was not just equivalent to what Anthropic had tried to negotiate, but better. In a lengthy blog post, OpenAI laid out in detail what it agreed to, but a) despite everything, it appears the company did in fact accept the contract line about “all lawful uses” that Anthropic rejected, and b) the timing of the deal was revealing, to say the least.
Publicly, Altman had defended Anthropic for standing its ground; behind closed doors, he inked a deal that effectively meant taking Anthropic’s place.
Did OpenAI really have better negotiators than Anthropic, or did being a MAGA mega-donor have something to do with it? In case you didn’t know, not too long ago it was reported that Greg Brockman, President of OpenAI, and his wife Anna made $25 million in donations to a Trump super PAC.
I also thought the language in Hegseth’s tweet, accusing Anthropic of “corporate virtue signaling” that was “cloaked in sanctimonious rhetoric of ‘effective altruism’”, was curious coming from a man who previously worked as a co-host of Fox & Friends. One wonders if someone whispered something about effective altruism and leftism, in passing, to the right people at the right time.
I guess we’ll never know.
What we do know, though, is that AI has made it to the center of power. As we speak, the technology is being used by and integrated deeply into the military and the broader state apparatus. And if that claim were in need of any evidence, let it be known that Anthropic’s Claude models were used in the Iran strikes over the weekend, as well as in the Venezuela raid.
It’s clear that now and in the future, AI will be used to fight wars, surveil domestic populations, and wage cyber warfare (both defensive and offensive). Anthropic’s story is just the first of many: a story about a private AI company that has to choose who it wants to be, even if that means going against its own government. What is it they say again? A principle is only a principle when it costs you something.
Fortunately for Anthropic, there’s a silver lining. The public seemed thoroughly impressed by CEO Dario Amodei’s handling of it all, while many judged Sam Altman’s words and actions to be disingenuous.
It led OpenAI loyalists to openly post on Reddit and Twitter that they were cancelling their subscriptions, including celebrities like … Katy Perry?
At the time of writing, the momentum seems to be holding. The number of people switching to Claude is so large that the Claude app has risen to number 1 in the App Store, topping Google’s Gemini and OpenAI’s ChatGPT.
All the best,
— Jurgen
1. New details on precisely where the lines were drawn have emerged, as reported by The Atlantic: https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/






Looking back: What would Isaac Asimov say to Trump/Heggy?
Looking ahead: Where do I buy a robot that can defend my front porch from a phalanx of dogbots & terminators?
Well, I hope they go public soon so the people can have a vote.