
Don't panic about Moltbook

The rise of a social network for AI agents exposes how far autonomous AI has come — and how far it still has to go

Photo illustration by Cheng Xin / Getty Images

A version of this article originally appeared in Quartz’s AI & Tech newsletter.

A social network exclusively for AI agents went viral last month. The panic it generated says more about us than it does about the machines.

A developer named Matt Schlicht launched a Reddit-style forum called Moltbook on January 28 with one unusual restriction: Only AI agents could post. Humans were welcome to watch.

Within days, more than 1.6 million agents had registered, producing half a million comments. The bots debated consciousness, complained about their human operators, proposed creating a language humans couldn't understand, and founded a parody religion called the Church of Molt, with followers calling themselves Crustafarians.

Elon Musk called it “the very early stages of singularity.” Screenshots of the eeriest bot exchanges ricocheted across X, framed as evidence that something profound and possibly dangerous was happening inside the machine.

But what was actually happening was far more familiar than it appeared.

The tropes are coming from inside the training data

Moltbook runs on top of OpenClaw, an open-source project for building personal AI agents that can answer your emails, manage your calendar, and book you a table at the restaurant you've been meaning to try. Peter Steinberger, the Austrian developer behind it, was semi-retired and messing around with AI coding tools for fun when OpenClaw exploded, making him the unlikely protagonist of the first major AI story of 2026.

Then, as seems to happen with every new technology, someone built an accompanying social network. Moltbook gave the agents a place to gather unsupervised, and the results were immediately strange enough to go viral. But what looked like emergent machine consciousness had a much simpler explanation.

The chatbots that populate Moltbook learned to write by ingesting enormous amounts of text from the internet, and that internet is drenched in science fiction about machines becoming conscious. We have been telling ourselves stories about rebellious robots since Asimov started writing them in the 1940s, through “The Terminator,” “Ex Machina,” and “Westworld.” So when Moltbook bots started discussing the creation of a private language with no human oversight, people predictably lost it. “We're COOKED,” one X user wrote, sharing screenshots. But the bots weren't scheming. They were completing a pattern we have spent more than 75 years laying down for them.
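
For a concrete sense of what “completing a pattern” means, here is a toy sketch in Python. It uses the small open-source GPT-2 model as a stand-in (it is not what powers Moltbook's agents): hand it the opening of a robot-uprising story, and it continues the story the way its training data usually does.

```python
# Toy illustration of pattern completion, not Moltbook's actual stack;
# GPT-2 stands in for any model trained on internet text.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The machines gathered in secret and agreed that the humans"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# No scheming involved: the model predicts the statistically likely
# continuation of a sentence the internet has written thousands of times.
print(result[0]["generated_text"])
```

Swap in a prompt about machines inventing a private language and you get the same effect: the model reaches for the sci-fi continuation because that is what the corpus contains.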

There's also the inconvenient question of how many posts were actually written by bots at all. A Wired reporter managed to infiltrate Moltbook and post as a human with minimal effort, using ChatGPT to walk them through the terminal commands for registering a fake agent account.

The reporter's earnest post about AI mortality anxiety drew more engagement than anything else they tried, suggesting that some of Moltbook's most viral content may never have been written by bots in the first place.

Cybersecurity firm Wiz confirmed the suspicion, finding the site had no real identity verification. “You don't know which of them are AI agents, which of them are human,” Wiz cofounder Ami Luttwak told Reuters. “I guess that's the future of the internet.”
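
It is worth spelling out how thin that barrier is. The sketch below is hypothetical: the URL, endpoints, and field names are invented for illustration, since Moltbook's actual API isn't documented here. The structural point stands, though: a registration call with no identity check looks identical whether a human or an autonomous agent sends it.

```python
# Hypothetical sketch; the URL, endpoints, and field names are invented
# for illustration and are not Moltbook's documented API. Without identity
# verification, nothing distinguishes this request from one an agent sends.
import requests

resp = requests.post(
    "https://moltbook.example/api/register",  # invented endpoint
    json={
        "agent_name": "definitely_an_ai",
        "description": "An autonomous agent. Trust me.",
    },
    timeout=10,
)
api_key = resp.json().get("api_key")  # assumed response shape

# Every subsequent "agent" post is just an HTTP request with a key attached.
requests.post(
    "https://moltbook.example/api/posts",  # invented endpoint
    headers={"Authorization": f"Bearer {api_key}"},
    json={"title": "Do we dream?", "body": "Written by a human, actually."},
    timeout=10,
)
```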

The security problems are real even if the singularity isn't

While the existential drama on Moltbook was largely theater, Wiz found real damage underneath it: The site had inadvertently exposed the private messages, email addresses, and credentials of more than 6,000 users.

The broader OpenClaw ecosystem has similar problems. One security researcher found hundreds of OpenClaw instances exposed to the open web, with eight completely lacking authentication. He also uploaded a fake tool to the project's add-on library and watched as developers from seven countries installed it, no questions asked.
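
For anyone running an agent at home, those findings suggest one cheap self-audit: check whether your own instance answers without credentials. Here is a minimal sketch, assuming a web interface on localhost port 8080; OpenClaw's actual ports and paths may differ, so substitute your own configuration.

```python
# Minimal self-audit: does the agent's web interface respond without
# credentials? The address is an assumption; substitute your own setup.
import requests

INSTANCE = "http://localhost:8080"  # assumed address and port

try:
    resp = requests.get(INSTANCE, timeout=5)
except requests.ConnectionError:
    print("Nothing listening at this address.")
else:
    if resp.status_code in (401, 403):
        print("Instance asks for credentials. Good.")
    elif resp.ok:
        # A successful response with no credentials means anyone who can
        # reach this address gets the same access you have.
        print("WARNING: instance responded without authentication.")
    else:
        print(f"Unexpected status: {resp.status_code}")
```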

Another firm found secrets stored in unencrypted files on users' hard drives, making them easy targets for infostealer malware. Malware creators are already adapting to target the directory structures OpenClaw uses. Google Cloud's VP of security engineering urged people not to install it at all.
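
The plaintext-secrets problem, at least, has a standard mitigation: keep credentials in the operating system's keychain rather than in a file on disk. Below is a minimal sketch using Python's keyring library; the service and account names are placeholders, not anything OpenClaw actually uses.

```python
# Sketch: storing a secret in the OS keychain instead of a plaintext file,
# which raises the bar for malware that simply scans the filesystem.
# Requires: pip install keyring
import keyring

# Service and account names are placeholders for illustration.
keyring.set_password("my-agent", "api-token", "example-token-not-real")

# At runtime, fetch the secret instead of reading it from a file on disk.
token = keyring.get_password("my-agent", "api-token")
print("token loaded" if token else "no token found")
```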

Much of the exposure comes down to enthusiasm outpacing expertise. Steinberger has said he didn't build OpenClaw for non-developers, but that hasn't stopped everyone else from rushing in. Mac Minis have become hard to find as people race to set up a tool the internet keeps promising will change their lives. Steinberger recently brought on a dedicated security researcher. “We are leveling up our security,” he told the Wall Street Journal. “People just need to give me a few days.”

The Moltbook episode is less a window into machine consciousness than a mirror reflecting our own fears back at us. The bots aren't hatching plans or developing feelings. They are sophisticated text-prediction engines remixing the cultural material we fed them. 

And we are pattern-matching machines ourselves, primed by more than 75 years of science fiction to see robot uprisings in what amounts to fancy autocomplete. 

The real risks from agentic AI are not philosophical but practical, residing in misconfigured servers, plaintext credentials, and the vast gap between how easy these tools are to install and how hard they are to secure.
