OpenAI, Anthropic, and the fog of AI war
After Anthropic refused to bow to Trump administration demands, the Pentagon labeled it a supply-chain risk — yet bombed Iran while still using its tools

Photo by Fatemeh Bahrami/Anadolu via Getty Images
President Donald Trump ordered every federal agency on Friday afternoon to stop using Anthropic’s AI technology. That evening, the Pentagon labeled the company a supply-chain risk, a designation normally reserved for Chinese firms suspected of espionage, and one that could force any company doing business with the Defense Department to prove it doesn't use Anthropic's tools.
The very next day, the U.S. struck Iran while Anthropic's tools were still running inside Central Command, the military's Middle East headquarters, powering targeting and intelligence systems. Trump had granted agencies six months to phase out the technology, a tacit acknowledgment that you can’t rip AI from military operations overnight.
The rupture between the administration and Anthropic is nominally about guardrails. The company said it refused to let its tools be used for autonomous weapons or mass surveillance and wouldn't budge when officials demanded blanket permission to use the technology in any lawful scenario. Anthropic CEO Dario Amodei said the company couldn’t agree in good conscience. Trump responded by calling Anthropic a “radical-left, woke company” that would never dictate how the military fights.
Within hours of the ban, OpenAI announced a new deal to deploy its models in classified Pentagon settings. OpenAI CEO Sam Altman disclosed a notable detail: The agreement includes the same prohibitions on mass surveillance and autonomous weapons that Anthropic had sought. The Pentagon, he wrote on X, “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
So the company that got blacklisted and the company that got rewarded appear to have secured functionally similar terms. The difference is most likely politics, or more precisely, the perception of obedience this administration seems to require from the private sector. OpenAI’s president gave $25 million to a pro-Trump super PAC last year. Anthropic hired Biden administration officials and lobbied for AI regulation.
As one former military AI official from Trump’s first term put it: Anthropic is paying the price for not bowing down.
What we don’t know is worse than what we do
The political maneuvering would matter less if it weren’t playing out against actual warfare. The Wall Street Journal reported that Claude, Anthropic’s AI, was embedded in Saturday’s Iran operation, used for intelligence assessments, target identification, and battle simulations. Central Command declined to comment on specific systems involved in ongoing operations.
Then came a harder question. When a mis-targeting incident reportedly killed more than 150 schoolchildren in Iran, outside observers immediately asked whether AI could have contributed to the error. The honest answer is that nobody outside the Pentagon knows, and the Pentagon isn’t saying. Defense Secretary Pete Hegseth, who has staked his tenure on aggressive AI adoption, has little incentive to be forthcoming.
Targeting errors aren't new, but the introduction of generative AI into the targeting chain is. This is technology that still hallucinates facts, misreads images, and stumbles over reasoning in low-stakes commercial settings. Deploying it in warfare, where the consequences of a wrong answer are measured in bodies, represents a leap that no one, military or otherwise, has rigorously tested.
The consumer backlash has complicated OpenAI's victory lap. Anthropic's Claude app shot to the top of the App Store. A grassroots boycott campaign urged users to drop ChatGPT over OpenAI's Pentagon deal. On X, Altman faced a barrage of pointed questions: If OpenAI's contract permits all lawful uses, how can it also prohibit mass surveillance and autonomous weapons, which have no explicit legal ban? If OpenAI secured the same red lines Anthropic wanted, why couldn't the Pentagon accept those terms from Anthropic? The contradictions matter beyond the discourse.
These companies are in a ferocious competition for paying users, enterprise clients, and engineering talent. Neither is profitable. Both are burning billions and have raised tens of billions more in recent weeks to stay in the race. The Pentagon contracts are worth around $200 million apiece: not the biggest check either company will cash this year, but suddenly the biggest threat to both businesses.
For Anthropic, a supply-chain risk designation reaches far beyond the Pentagon. Any company that does business with the federal government, and that includes Anthropic's biggest backers Amazon $AMZN and Google $GOOGL, may need to prove it doesn't use Claude. That requirement could ripple through enterprise sales, cloud partnerships, and investment decisions well beyond defense.
For OpenAI, a classified-use agreement is one thing as a line item in a contract negotiation. It's another when bombs are actively falling and the questions about guardrails, targeting errors, and dead children don't have clear answers. The perception that your chatbot helps pick bombing targets is not a brand problem that a few replies on social media can solve.