The year of Anthropic

Anthropic's AI products are destabilizing enterprise software, reshaping how engineers work, and putting the company at the center of a full-blown Pentagon standoff

Anthropic CEO Dario Amodei (Krisztian Bocsi/Bloomberg via Getty Images)

Anthropic spent years being the responsible AI company. In 2026, it became the most disruptive one.

The same tools it developed under strict safety guidelines are now destabilizing enterprise software, reshaping how engineers work, and putting it at the center of a full-blown Pentagon standoff.

The AI industry's main sport for the last few years has been benchmarks. Who scores highest, whose context window is longest, whose demo lands hardest at a conference. Anthropic played that game, too. 

It also made a sustained push into a specific market: software engineers, highly paid and spending their days doing exactly the kind of work AI was getting good at. That focus is now paying off in a way that's changed the conversation about who's actually winning this race.

Claude Code had a ChatGPT moment, but for engineers. Engineers started shipping software at speeds that felt almost physically impossible. Anthropic executives have been vocal about what that means for the profession. At Davos, CEO Dario Amodei predicted AI could handle most or all of software engineering work, end to end, within six to 12 months. Claude Code's creator declared the job title itself might soon disappear.

Anthropic's own hiring tells a more complicated story. The company's open software engineering roles have climbed 170% since January 2025, according to one tracker, with the curve accelerating. 

What's harder to dismiss is the outside evidence about the vibe shift on vibe coding. Paul Ford, a technologist and longtime software industry observer, wrote in The New York Times that something changed in November.

Before then, AI coding tools were useful but halting. After, he was finishing projects that had sat in folders for a decade, on his subway commute. His friends were noticing the same thing. The software engineer corner of the internet lit up with similar accounts. "I am less valuable than I used to be," Ford wrote.

When Anthropic published a blog post late last month claiming Claude Code could translate legacy COBOL into modern languages, IBM lost roughly $40 billion in market cap in a single session. The broader sell-off wiped more than a trillion dollars from Big Tech valuations. Legal software stocks dropped. Design stocks dropped. Analysts pointed out that mainframe modernization involves far more than converting code, and IBM's technical moat runs deep. Nvidia $NVDA CEO Jensen Huang called the panic “illogical.” Franklin Templeton's CEO told the Financial Times it looked like a genuine long-term threat to enterprise software's business model.

For Anthropic at least, the disruption was good for business. The company was on pace to generate $9 billion in revenue at the end of 2025. By early March that figure had reportedly almost doubled to $20 billion. The share of U.S. companies paying for its tools hit 20% in January, up from roughly 4% a year earlier. OpenAI is still larger, but its share of enterprise spending fell from 50% to 27% over the same period while Anthropic's climbed to 40%.

Then came demands from the Pentagon.

The Trump administration ordered federal agencies to stop using Anthropic's technology and labeled the company a supply-chain risk, a designation normally reserved for Chinese firms under espionage suspicion. The trigger was a dispute over guardrails, with Anthropic refusing to give blanket permission for its tools in autonomous weapons systems or mass surveillance.

Within hours, OpenAI announced a new Pentagon deal. CEO Sam Altman said publicly that it included the same prohibitions on autonomous weapons and mass surveillance that Anthropic had sought. Not everyone believes that, and the fallout has been swift in both directions.

Anthropic's app shot to the top of the App Store. A boycott campaign targeted OpenAI. It turns out that sticking to your principles, or at least being seen to, is its own kind of marketing.

The supply-chain risk designation is a real threat, though, one that could ripple through Anthropic's key relationships with Amazon $AMZN and Google $GOOGL, both significant federal contractors and two of the company's biggest backers. And the company is not yet profitable.

But while the Pentagon drama plays out, engineers are still on their subway commutes, finishing in an hour what used to take a week. That's the thing that started all this. And it’s still only March.
