
Sam Altman's human verification company is now building for AI agents

World, the iris-scanning identity startup formerly known as Worldcoin, launched a toolkit that lets verified humans delegate their identity to AI agents


The company built on the argument that humans need protection from bots is now helping bots borrow a person's humanity.

World, the iris-scanning identity startup formerly known as Worldcoin, just announced the launch of AgentKit, a toolkit that lets verified humans delegate their identity to AI agents. The company built its identity system by offering cryptocurrency as an incentive for people to scan their irises, generating a unique proof that each user is a real human. That pitch, from a company co-founded by Sam Altman, produced both 18 million sign-ups and regulatory scrutiny across three continents.

Agentic commerce, in which AI acts on your behalf to book reservations, compare prices, or complete purchases, is growing fast. McKinsey sees the market hitting somewhere between $3 trillion and $5 trillion globally by 2030. Bain puts AI agents at potentially a quarter of all U.S. e-commerce over the same period. The problem is that platforms have not figured out how to let legitimate agent traffic in while keeping bad actors out, so many AI agents simply get blocked.

AgentKit wants to thread that needle. A human verifies their World ID through the usual iris-scanning process. They can then register one or many AI agents under that ID. When one of their agents visits a compatible website, it can provide cryptographic proof that a real, unique person is behind it, without revealing who that person is.
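The flow described above can be sketched in a few lines. This is a purely illustrative toy, not World's actual protocol or API: it assumes a human holds a secret credential, derives per-agent credentials from it, and a site verifies a challenge response from the agent without ever learning the human's identity. Every function and name here is hypothetical.

```python
# Toy sketch of identity delegation: a verified human's secret credential
# (standing in for a World ID) derives per-agent credentials, and a site
# checks a challenge-response proof without seeing who the human is.
# The scheme and all names are illustrative assumptions, not World's design.
import hashlib
import hmac
import secrets

def register_agent(human_secret: bytes, agent_id: str) -> bytes:
    # Derive a per-agent credential from the human's secret; the site only
    # ever sees this derived value, never the underlying identity.
    return hmac.new(human_secret, agent_id.encode(), hashlib.sha256).digest()

def prove(agent_cred: bytes, challenge: bytes) -> bytes:
    # The agent answers a site's random challenge with its delegated credential.
    return hmac.new(agent_cred, challenge, hashlib.sha256).digest()

def verify(agent_cred: bytes, challenge: bytes, proof: bytes) -> bool:
    # The verifier holding the registered credential checks the response.
    return hmac.compare_digest(prove(agent_cred, challenge), proof)

# One human registers an agent; a site issues a challenge and checks the proof.
human_secret = secrets.token_bytes(32)
booking_bot = register_agent(human_secret, "booking-bot")
challenge = secrets.token_bytes(16)
proof = prove(booking_bot, challenge)
print(verify(booking_bot, challenge, proof))  # True
```

Note that because every agent credential traces back to one `human_secret`, a verifier can tell that a thousand agents share a single human owner, which is the signal the article describes.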

That last part is the point. The technology is not just about letting agents in. It is about making sense of who, or what, is actually there. World points to Moltbook, the AI agent social network that briefly captivated the internet earlier this year, as evidence of what happens without it.

Within days of launching, 1.6 million agents had registered and nobody, including the platform, could reliably say which posts came from bots, which came from humans pretending to be bots, or how many distinct people were behind any of it. A micropayment can slow a bad actor down. Knowing that a thousand agents all trace back to one person is a different kind of signal entirely.

Meta $META, which acquired Moltbook last week, has historically shown little appetite for policing whether the content on its platforms comes from real people. A social network full of bots generating endless AI slop, with no human owner to point to, is not obviously a problem it would rush to solve.

OpenAI, Altman's other day job, is reportedly taking the opposite approach. It is considering building a social network premised entirely on keeping bots out, potentially using biometric proof of personhood via World's iris-scanning orb or Face ID from Apple $AAPL.

But a social network is a different beast than an agent trying to book a restaurant or complete a purchase on your behalf. World is not trying to solve one platform's bot problem. It is trying to build the identity layer for the entire agentic web.

That bet comes with some baggage. World has been banned or investigated in at least ten countries over privacy and data concerns. Its early growth strategy leaned heavily on recruiting users in the Global South, offering Worldcoin tokens as an incentive to hand over biometric data.

Now, with regulators warming to crypto and the agentic web arriving faster than anyone planned, the company formerly known as Worldcoin is making a bigger bet. If AI agents really do become how we shop, book, browse and transact, someone will need to vouch for the humans behind them. World would like that to be its problem to solve.
