Anthropic is going after ChatGPT in a buzzy new ad campaign

Claude’s Super Bowl swing turns therapy, homework, business, and fitness into ad breaks — a pointed jab as OpenAI tests ads in ChatGPT

Photo via Anthropic

A man sits in a therapist’s office, trying — earnestly — to figure out how to talk to his mom. His “therapist” listens, nods, offers something that almost sounds helpful, and then, without changing expression, abruptly pivots into a pitch for “Golden Encounters,” a fictional dating site for younger men seeking older women.

If you felt your soul leave your body for a second, congratulations: You understood the assignment.

That lurch is the centerpiece of Anthropic’s new Claude ad campaign, which spends its time on a single, pointed fear: Once the chat window becomes a business model, the chatbot’s loyalties start to blur. The campaign, called “A Time and a Place,” was created with Mother and directed by Jeff Low, and it’s built to scale from internet buzz to a mass audience. 

Each spot starts with a normal modern ask — help me write, help me decide, help me get in shape, help me be a better person — and then yanks the wheel into an absurd product plug, delivered in the exact cadence people now associate with chatbot help. All of the ad spots end with a line aimed straight at OpenAI, which recently announced an ad tier: “Ads are coming to AI. But not to Claude.” 

Anthropic is making its case on a stage that doesn’t do subtle: Super Bowl LX. A 30-second ad will reportedly run during the game, with a longer, 60-second cut in the pregame — an expensive way (see: around $8 million for a game-day spot) to introduce Claude, via the biggest, loudest megaphone in American advertising, to people who don’t spend their days arguing about large language models.

The other ads widen Anthropic’s point into different everyday corners — and Anthropic gives them names that don’t exactly whisper subtlety. The spots are labeled like sins in a morality play: “Treachery,” “Deception,” “Violation,” and “Betrayal.” The joke is that the AI isn’t wrong to monetize; it’s just socially unbearable when the monetization barges into what feels like a private moment.

In “Treachery,” a student asks a teacher for reassurance about an essay — and gets it, right up until the “teacher” starts pushing jewelry discounts mid-feedback. In “Deception,” a nervous female entrepreneur pitches a business idea and receives warm, mentor-y guidance — until the AI swerves into a payday-loan plug (“Because girlbosses need SHE-E-O money quick”). In “Violation,” a short, scrawny guy is doing a pull-up — a wink at OpenAI’s “Pull-Up with ChatGPT” ad from last year — and asks a buff trainer, “Can I get a six-pack quickly?” The trainer starts out like a pocket life coach, and then tries to sell him “Step Boost Max,” fictional insoles “that add one vertical inch of height and help short kings stand tall.”

That’s Anthropic taking the same consumer-AI premise OpenAI is selling and flipping it into a cautionary tale: Imagine asking for help and getting sold to mid-sentence. Same cadence. Same gentle authority. Same little turn where the conversation stops being about the person asking the question and starts being about the invisible person paying for the interruption.

Anthropic paired the ads with a public pledge. “There are many good places for advertising,” the company wrote in a Wednesday blog post. “A conversation with Claude is not one of them.” Claude, Anthropic says, will remain ad-free: no sponsored links beside chats, no third-party product placements, no advertisers nudging responses.

OpenAI, for its part, has stopped pretending the ad question is theoretical. In January, it said it plans to start testing ads “in the coming weeks” in the U.S. for logged-in adults on the free tier and its $8-a-month Go tier. The initial format puts ads at the bottom of answers when there’s a “relevant sponsored product or service,” clearly labeled and separated from the organic response, with options to dismiss the ad and see why it appeared. 

OpenAI’s argument is that ads can expand access without corrupting the core product — and that answers won’t be influenced by advertisers. Anthropic’s counterargument is simpler and, frankly, stickier: Ads change incentives, and incentives change behavior, especially in a product that people use for work, advice, and sometimes the sort of confessions they probably shouldn’t be typing into any app with a login screen.

On paper, this looks like the familiar internet bargain: You either pay with money, or you pay with attention. In a chat window, the attention tax feels different. A feed can wear an ad like a cheap suit. A chatbot speaks in the first person, remembers context, and invites you to hand over the messy stuff — work drafts, health worries, the delicate interpersonal scripts you’re too embarrassed to rehearse with a friend.

Anthropic is arguing that ads don’t merely sit alongside a conversation; they tug on the direction of it. Someone shows up asking for help sleeping or focusing, and the revenue engine starts scanning for a product-shaped exit. The risk isn’t a cartoon villain whispering “buy.” The risk is the quieter drift toward what pays — the suggestion that keeps you engaged, the recommendation that happens to have a sponsor, the subtle pressure to treat your train of thought as inventory. Ads beside content are the price of the modern internet. Ads inside something that talks back land differently.

OpenAI CEO Sam Altman didn’t entirely hate the ads — “they are funny, and I laughed,” he wrote in a post on X — but called the premise “so clearly dishonest” and claimed that OpenAI “would obviously never run ads in the way Anthropic depicts them” because “we know our users would reject that.”

He added, “I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it.” Altman then tried to reframe the fight as access and scale, touting ChatGPT’s free reach and taking a big swing at Anthropic (“more Texans use ChatGPT for free than total people use Claude in the U.S.”) and essentially calling the rival company a pricey, gatekeeping alternative that “wants to control what people do with AI.”

Still, in enterprise circles, Claude has been steadily muscling into workflows where “model quality” is a procurement decision, not a vibe. But in consumer land, ChatGPT is still the Kleenex of chatbots: the name people use when they mean the category. Anthropic’s ads aren’t really trying to win a feature comparison. They’re trying to win a reflex. Anthropic is making a promise, yes — and also handing mainstream users a mental sorting mechanism at the exact moment AI is becoming normal enough to attract the internet’s oldest business model.

If the future of chat is sponsored, Anthropic is pitching itself as the place you go when you’d rather not be pitched while you’re asking for help.

OpenAI’s bet is that people will tolerate ads if the product stays powerful and the price stays low. Anthropic’s bet is that, in a world already exhausted by the ad economy, “ad-free” can be a feature people choose on purpose — and pay for, or bring into the office on an expense account. Either way, the AI wars are growing up: less demo magic, more business model. And apparently, more dodgy therapy.