
A ChatGPT boycott helps send Anthropic’s Claude soaring

Anthropic drew red lines, OpenAI signed anyway, and the internet voted fast with downloads as Claude topped the App Store — then briefly crashed

Photo illustration by Michael M. Santiago/Getty Images

For a technology that sells itself as frictionless, switching chatbots has suddenly become a full-contact sport. Over the weekend, “Cancel ChatGPT” and “DeleteGPT” and “QuitGPT” posts started ricocheting around social media — and by Monday, Anthropic’s Claude had climbed to No. 1 on Apple $AAPL’s U.S. App Store free-download chart. 

But then on Monday morning, Claude briefly went down after what the company described as an “unprecedented” surge in demand — the kind of phrasing that doubles as both apology and a flex. The app that won the internet briefly broke the internet. Downdetector lit up. Users refreshed. And somewhere in San Francisco, someone muttered the most Silicon Valley sentence imaginable: “We’re trending; add servers.”

Service was restored within hours, but not before the irony crystallized: the app benefiting from a mass migration had to absorb it.

The spark for all the chaos was Washington.

Anthropic, after months of negotiations with the Pentagon, says it hit an impasse because it wanted two explicit carve-outs from any “lawful use” deal: no “mass domestic surveillance of Americans” and no “fully autonomous weapons.” On weapons, Anthropic says today’s frontier models “are simply not reliable enough to power fully autonomous weapons” — and says that mass domestic surveillance would violate “fundamental rights.” It was willing to walk away from “several hundred million dollars in revenue” rather than sign.

Hours after Anthropic was sidelined, OpenAI went the other direction. It agreed to deploy advanced AI systems in classified environments. OpenAI insists it shares Anthropic’s two taboos and adds a third: “high-stakes automated decisions.” The company claims it built a sturdier enforcement machine than “usage policies” — a “cloud-only deployment” that keeps OpenAI’s safety stack running, with “cleared OpenAI personnel in the loop.”

But the Pentagon didn’t really budge; OpenAI’s deal is much softer than what Anthropic pushed for — the tell is that “any lawful use” phrase. Instead of a slow, lawyerly comms rollout, CEO Sam Altman made an announcement on X and then did an AMA, admitting the deal was “rushed” and conceding that “the optics don’t look good.”

The result was a weekend referendum conducted in taps and swipes; the boycott’s organizers claim more than 1.5 million people have “taken action” against OpenAI. In some corners, the energy centered on defense contracts and where AI companies should draw lines. In others, it was more personal: a backlash aimed squarely at OpenAI and its leadership, amplified in the shorthand and sarcasm native to social media.

And plenty of people have performed their outrage the modern way: by downloading a competing app.

Anthropic has reported record signups, with free active users up more than 60% since the start of the year and paid subscriptions more than doubling. But there’s scale, and then there’s scale. OpenAI says ChatGPT has more than 900 million weekly active users. A boycott, even a loud one, can’t entirely erase that advantage.

Still, OpenAI gets heat for a defense deal; Anthropic gets praise and benefits from the switch-over; then Claude’s infrastructure has to absorb a crowd whose hobby is asking for more tokens than physics can possibly provide. Morality plays can drive distribution, but distribution still has to clear the uptime bar.
