Anthropic accidentally leaked its own source code for Claude Code

About 500,000 lines of code across roughly 1,900 files were exposed after a debug file was bundled into a public package update

Anthropic accidentally exposed internal source code for its Claude Code AI coding tool after a debug file was mistakenly included in a public npm package update, Axios reported. The leak exposed roughly 500,000 lines of code across approximately 1,900 files, according to Fortune.

"No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

An X post linking to the exposed code had accumulated more than 21 million views within hours of being shared early Tuesday morning, CNBC reported.

The incident is Anthropic's second significant data exposure in under a week. Earlier this month, Fortune reported that close to 3,000 files had been left in a publicly accessible data store on Anthropic's website, including a draft blog post describing a powerful upcoming model known internally as both "Mythos" and "Capybara." Anthropic attributed that earlier exposure to a configuration error in an external content management tool.

The code that leaked in the latest incident belongs to what Fortune describes as Claude Code's "agentic harness" — the software layer that wraps the underlying AI model and governs how it interacts with other tools. A cybersecurity professional who reviewed the leak for Fortune said the exposure could allow technically sophisticated actors to extract additional internal information from the codebase beyond the source code itself.

Roy Paz, a senior AI security researcher at LayerX Security, told Fortune the mistake appeared to stem from someone bypassing normal release procedures — uploading the full original source rather than only the compiled version intended for distribution. Anthropic said normal release safeguards were not bypassed. Paz added that the leaked code could reveal non-public details about internal APIs and system architecture, which in turn could inform attempts to circumvent existing safety guardrails.

The code also contained further evidence of the forthcoming Capybara model, according to Paz, who said the code's references to the model's context window suggested the company may release both a faster and a slower version.

The leak hands competitors a detailed look at how Claude Code works behind the scenes. The tool is among Anthropic's most commercially significant products: Claude Code's annualized revenue had surpassed $2.5 billion as of February, according to CNBC, and the tool has drawn competing products from OpenAI, Google, and xAI.

The latest leak is not the first time Claude Code's internals have been inadvertently exposed. According to Fortune, an early version of the tool accidentally leaked similar details in February 2025, revealing how it connected to Anthropic's internal systems. Anthropic subsequently removed the software and took the public code down.

Anthropic was founded in 2021 by former OpenAI executives and researchers, and is best known for its Claude family of AI models.
