
Claude Source Code Leaked Again: Anthropic Says No Customer Data Compromised

Anthropic confirms a second major data slip after exposing Claude Code’s internal architecture via npm; explore the leaked ‘KAIROS’ agent and ‘Mythos’ model details

Summary
  • Anthropic leaked 512,000 lines of Claude Code source code via a 60MB npm source map file

  • Human error in the Bun bundler packaging process exposed 2,200 internal files and API logic

  • The leak revealed unreleased features like KAIROS (a 24/7 daemon) and Buddy (a terminal pet)


AI start-up Anthropic accidentally exposed the internal source code of its Claude Code AI assistant on Tuesday, triggering fresh scrutiny of its release practices and operational security, Bloomberg reported.

The incident was reportedly caused by a packaging error, not a cyberattack, and the company stressed that no customer data or credentials were involved.

“This was a release packaging issue caused by human error, not a security breach,” an Anthropic spokesperson reportedly said, adding that the company is rolling out measures to prevent a repeat.
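Guards against this class of release error are straightforward to automate: a pre-publish check can scan the staged package for debug artifacts before anything ships. The following is a minimal sketch, not Anthropic's actual tooling; the `dist/` layout and the blocklist patterns are illustrative assumptions.

```python
import pathlib
import sys

# Debug artifacts that should never ship in a published package.
# These patterns are an illustrative assumption, not a real config.
BLOCKED = ("*.map", "*.orig", ".env")

def find_debug_artifacts(package_dir: str) -> list[pathlib.Path]:
    """Return any blocklisted files found under the staged package."""
    root = pathlib.Path(package_dir)
    return [p for pattern in BLOCKED for p in root.rglob(pattern)]

if __name__ == "__main__":
    leaked = find_debug_artifacts("dist")
    if leaked:
        # Fail the release rather than publish debug files.
        print(f"refusing to publish: {len(leaked)} debug artifact(s) found")
        sys.exit(1)
```

A check like this would run in CI as a release gate, so a bundler misconfiguration fails the pipeline instead of reaching npm.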

The leak reportedly surfaced when a security researcher found that Claude Code version 2.1.88 included a source map file in its npm package. Such files are meant for debugging, but they can allow outsiders to reconstruct the original source code.
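Reconstruction from a source map requires no reverse engineering: the standard v3 format is plain JSON whose `sources` and `sourcesContent` arrays pair each original file path with its full text. The sketch below shows the principle on an invented miniature map; the file names and contents are made up for illustration, not taken from the leak.

```python
import json

# A miniature source map in the standard v3 JSON format. File names and
# contents here are invented -- a real map can embed thousands of
# original files in its "sourcesContent" array.
RAW_MAP = """
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/internal/api.ts", "src/telemetry/events.ts"],
  "sourcesContent": ["// internal API client\\n", "// telemetry events\\n"],
  "names": [],
  "mappings": ""
}
"""

def recover_sources(raw: str) -> dict[str, str]:
    """Map each original file path to its embedded contents."""
    sm = json.loads(raw)
    return dict(zip(sm["sources"], sm["sourcesContent"]))

recovered = recover_sources(RAW_MAP)
for path, content in recovered.items():
    print(f"{path}: {len(content)} chars")
```

When `sourcesContent` is populated, as it reportedly was here, the original files can simply be written back to disk, which is why a single `.map` file can expose an entire project's layout.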

The Code Leak

In this case, the file effectively made it possible to inspect large parts of the tool’s internal architecture and logic. Developers quickly began downloading and analyzing the exposed material, looking for clues about how Anthropic is building and evolving the product.


According to reports, the release exposed more than 500,000 lines of code across some 2,200 internal files. The material reportedly revealed internal APIs, telemetry and analytics systems, some encryption-related logic, and communication paths between different components.

Developers also found references to possible future features, including a Tamagotchi-like assistant that reacts while users code and a background agent called “KAIROS.” Internal comments in the code also offered a rare glimpse into engineering debates at Anthropic.

Anthropic’s Second Slip-Up

The incident is especially embarrassing because it is not the first time the company has faced a similar lapse.

Just days earlier, Fortune reported that Anthropic had accidentally made thousands of files publicly available, including a draft blog post about a powerful upcoming model known internally as “Mythos” and “Capybara.” A similar source map-related issue had also been reported in early 2025.


The repeated exposure has raised concerns about Anthropic’s security controls at a time when the company is positioning itself as a leader in AI safety and enterprise-grade reliability.

For developers and security researchers, the leak is less about a breach than about what it reveals: a company racing to ship advanced AI tools while struggling to keep sensitive internal information out of public view.