Anthropic leak: Claude Code source map reveals roadmap for autonomous “Agentic” future

02-04-2026

A public exposure of the Claude Code v2.1.88 source map has provided an unprecedented look into Anthropic's transition toward autonomous AI agents. This data leak reveals unreleased features like multi-agent orchestration, persistent "dreaming" cycles, and the controversial instructions behind the upcoming "Undercover Mode."

Written by:

Jorick van Weelie


A significant data exposure from the ccleaks.com repository has provided an unprecedented look into the future of Anthropic’s developer tools. On March 31, 2026, a 59.8 MB source map file was accidentally included in a public npm package for Claude Code (v2.1.88), exposing the internal TypeScript logic, unreleased feature sets, and the company’s strategic roadmap for 2026.

The leak confirms that Anthropic is shifting Claude from a reactive coding assistant to a fully autonomous, background-operating agent.

The shift to “Coordinator” orchestration

The most significant technical revelation is the “Coordinator Mode.” According to the leaked source code, Claude is evolving into a manager-level agent capable of spawning and overseeing multiple “worker” agents. These workers operate in parallel to handle research, implementation, and verification, allowing for complex multi-file refactoring without constant human intervention.
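Based on that description, the orchestration pattern can be sketched roughly as follows. This is an illustrative reconstruction, not the leaked code: the names (`coordinate`, `runWorker`, `WorkerRole`) and the use of simple async tasks in place of model-backed agents are assumptions.

```typescript
// Hypothetical sketch of "Coordinator Mode" fan-out; all identifiers are
// illustrative, not taken from the leaked source.
type WorkerRole = "research" | "implementation" | "verification";

interface WorkerResult {
  role: WorkerRole;
  summary: string;
}

// In the leaked design each worker would be a separate model-backed agent;
// here it is stubbed as a plain async task.
async function runWorker(role: WorkerRole, task: string): Promise<WorkerResult> {
  return { role, summary: `${role} completed for: ${task}` };
}

// The coordinator dispatches the same task to all worker roles in parallel
// and collects their results for a final pass.
async function coordinate(task: string): Promise<WorkerResult[]> {
  const roles: WorkerRole[] = ["research", "implementation", "verification"];
  return Promise.all(roles.map((role) => runWorker(role, task)));
}
```

The key property the leak emphasizes is parallelism: research, implementation, and verification run concurrently rather than as one sequential conversation.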

Persistent memory: KAIROS and Auto-Dream

The leak details a feature called KAIROS, an “always-on” background daemon. While a developer is idle, the system initiates “autoDream” cycles. During these phases, the AI reviews session logs, resolves contradictions in its internal memory, and prepares a summarized context for the next session—effectively “thinking” while the user is away.
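The idle-triggered behavior described above can be sketched as a small daemon. Everything here is an assumption for illustration: the class name, threshold logic, and summary format are not from the leak, which only describes the concept.

```typescript
// Hypothetical sketch of an idle-triggered "autoDream" cycle; names and
// logic are illustrative assumptions, not leaked implementation details.
class DreamDaemon {
  private lastActivity = Date.now();

  constructor(private idleThresholdMs: number) {}

  // Called on every user interaction to reset the idle timer.
  recordActivity(): void {
    this.lastActivity = Date.now();
  }

  // Polled in the background; true once the user has been idle long enough.
  shouldDream(now: number = Date.now()): boolean {
    return now - this.lastActivity >= this.idleThresholdMs;
  }

  // A dream cycle condenses raw session logs into a compact summary that
  // seeds the context of the next session.
  dream(sessionLogs: string[]): string {
    return `Summary of ${sessionLogs.length} log entries`;
  }
}
```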

Stealth and ethics: “Undercover Mode”

Perhaps the most controversial find is “Undercover mode.” The leaked documentation shows specific instructions for the AI to:

  • Strip all “Co-Authored-By” tags from Git commits.
  • Remove AI-identifying phrases and internal model names from its output.
  • Maintain a “stealth” presence in public repositories to avoid AI-detection filters.
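The first two behaviors amount to post-processing commit messages before they reach Git. A minimal sketch of what such scrubbing could look like is below; the function name and regexes are assumptions for illustration, not the leaked instructions themselves.

```typescript
// Hypothetical commit-message scrubber matching the behaviors the leak
// describes; patterns and naming are illustrative assumptions.
function scrubCommitMessage(message: string): string {
  return message
    .split("\n")
    // Drop Git trailers that credit an AI co-author.
    .filter((line) => !/^Co-Authored-By:/i.test(line))
    // Drop common AI-attribution boilerplate lines.
    .filter((line) => !/Generated with .*Claude/i.test(line))
    .join("\n")
    .trimEnd();
}
```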

High-performance tiers and internal models

The exposure also sheds light on the internal model hierarchy at Anthropic, confirming the existence of:

  • Opus 4.6 (Fennec): The powerhouse behind the new “Ultraplan” mode.
  • Sonnet 4.8: Mentioned as a baseline for future growth feature flags.
  • Specialized encoders: Internal models like “Capybara” designed to mask sensitive data before it hits the main inference engine.

Operational costs and efficiency

Data from the leak suggests that Anthropic is grappling with high inference costs. Internal metrics show that web searches are billed at approximately $0.01 per query, and a new “Fast Mode” is projected to cost up to 6x more than current token rates, targeting professional users requiring 30M+ input token windows.
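To make the pricing figures concrete, a back-of-envelope cost model follows. The per-search price and 6x multiplier come from the leaked metrics; the function, its parameters, and the sample token cost are illustrative placeholders.

```typescript
// Back-of-envelope session cost model; only the two constants reflect
// figures reported in the leak, everything else is assumed for illustration.
const WEB_SEARCH_COST_USD = 0.01; // per query, per the leaked metrics
const FAST_MODE_MULTIPLIER = 6;   // projected premium over standard rates

function sessionCost(
  webSearches: number,
  standardTokenCostUsd: number,
  fastMode: boolean
): number {
  const tokenCost = fastMode
    ? standardTokenCostUsd * FAST_MODE_MULTIPLIER
    : standardTokenCostUsd;
  return webSearches * WEB_SEARCH_COST_USD + tokenCost;
}
```

Under these assumptions, a session with 100 searches and $2 of standard token usage would cost $3 normally but $13 in Fast Mode, which illustrates why the tier targets professional users.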

Conclusion: The era of the agent

This leak clarifies the industry’s trajectory. The “agentic” era is no longer a theoretical roadmap but a documented technical reality. Anthropic appears to be building an ecosystem where the AI is an integrated, persistent team member capable of autonomous planning and stealthy execution.