Beyond the Source Map: A Dual Case Study in Supply-Chain Failures and Safeguards

Introduction

This white paper examines the technical root causes of two recent supply-chain incidents,

quantifies the blast radius, and provides prescriptive controls for engineering and DevSecOps

teams responsible for securing software supply chains and release pipelines.

1. Incident Overview

1.1 Anthropic Claude Code Source Map Leak

A JavaScript source map (.map file) is a development artifact that maps compiled or minified output back to its original, human-readable source. It is strictly an internal debugging tool and has no legitimate place in a published npm package. One was nevertheless published to the public npm registry through oversight, exposing Claude Code's internal source.

1.2 Axios Supply Chain Attack

Independently, a threat actor published malicious versions of the axios HTTP client library to

npm during a three-hour window. The trojanized versions contained a cross-platform Remote

Access Trojan distributed through a dependency named plain-crypto-js.

2. Root Cause Analysis

2.1 Source Map Leak: The Bun Bug

Claude Code is built on Bun — a JavaScript runtime Anthropic acquired in late 2025. A known

bug in Bun (oven-sh/bun#28001, filed March 11, 2026) causes source maps to be generated

and served even when production mode is active. Bun's own documentation states source

maps should be disabled in production builds. The bug was open and unpatched at the time of

the release.

Note: This was a defense-in-depth failure: any single one of the controls described below would have prevented publication.

2.2 Axios Supply Chain Attack: Malicious Package Publication

The axios attack followed the classic dependency confusion / malicious publish pattern. The

threat actor obtained or created credentials sufficient to publish new versions of a legitimate,

widely-used package. The payload was embedded in a transitive dependency (plain-crypto-js)

to reduce scrutiny during manual review.

This attack is categorically distinct from the Anthropic leak: it required active adversary action rather than human error. But it shares the same delivery vector: the public npm registry with insufficient publisher verification. Malicious package publication is a common and growing supply chain attack pattern, captured in the OWASP Top 10 under A08:2021 (Software and Data Integrity Failures).

3. What Was Exposed

3.1 Proprietary Engineering Details

• Self-healing memory architecture and context window management logic

• Tool system internals: file read, bash execution, multi-agent orchestration

• Bidirectional IDE communication layer

• Query engine and LLM API orchestration patterns

• 44 feature flags — 20 covering unshipped functionality

3.2 Strategic Product Roadmap

• KAIROS: Persistent background agent mode with autonomous task execution and push notifications

• autoDream: Nightly memory consolidation and context distillation while idle

• Undercover Mode: System prompt directing Claude Code to conceal AI authorship when contributing to public open-source repositories

• Buddy: Tamagotchi-style companion feature with a coded rollout window of April 1–7 (referenced in Buddy/companion.ts)

3.3 Anti-Competitive Mechanisms

The leak exposed two anti-distillation mechanisms Anthropic had not disclosed publicly:

• Fake tool injection (ANTI_DISTILLATION_CC flag): Decoy tool definitions injected into

API requests to poison training data collected by competitors recording API traffic

• Server-side connector-text summarization with cryptographic signatures to prevent

verbatim extraction of assistant reasoning chains

3.4 Internal Model Codenames and Performance Metrics

• Capybara: Claude 4.6 variant (also referenced as Mythos in a separate prior leak)

• Fennec: Opus 4.6

• Numbat: Unreleased model still in testing

• Capybara v8 false claims rate: 29–30% (regression from 16.7% in v4) — a benchmark

competitors can now exploit

Note: Claude model weights, customer data, API keys, credentials, and authentication tokens were not exposed, and Anthropic confirmed no sensitive customer data was involved. The leak was source code and build artifacts only.

4. Prevention: Engineering Controls

A two-stage release pipeline is proposed below.

4.1 Two-Stage Release Pipeline: Private Registry Gate

The most architecturally sound mitigation is a two-stage release pipeline that physically separates internal builds from public distribution. Even if a packaging bug ships a .map file, it never reaches the public registry; it is caught and blocked in the internal stage.

For a commercially critical package with significant IP value, this pattern should be standard release governance. The marginal operational cost is low; the blast-radius reduction is total.

4.1.1 Stage 1: Internal Publish Pipeline

Every commit to main triggers a build that publishes to a private registry (e.g., GitHub Packages, a private npm registry, or an internal Docker registry). This stage runs all automated quality and security gates.

4.1.2 Automated Sanitization Gates (Stage 1)

The following checks run as blocking CI steps against the private registry build. Any failure

aborts promotion.

• .map file detection: fail if any *.map file is present in the packed tarball

• Package size assertion: fail if the tarball exceeds a defined size threshold (e.g., 10 MB)

• Allowlist check: assert that only allowlisted paths (dist/, bin/, README.md) are included

• Secret scanning: fail if credentials, API keys, or tokens are detected in the artifact

• Dependency vulnerability scanning: fail on known vulnerabilities anywhere in the dependency tree

• Code vulnerability scanning (SAST) of the package source

• Code signing: ensure all binaries are signed with a valid digital certificate

• Dependency integrity: verify all dependency hashes match the committed lockfile

• SBOM generation: produce a Software Bill of Materials for the release candidate
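As a concrete illustration, the first three gates can be expressed as a plain POSIX shell script. The staging directory, the planted .map file, and the 10 MB threshold below are illustrative stand-ins for a real unpacked tarball, not an actual pipeline.

```shell
#!/usr/bin/env sh
set -eu

# Illustrative staging tree standing in for the unpacked release tarball
stage=$(mktemp -d)
mkdir -p "$stage/dist"
echo 'console.log("hi")' > "$stage/dist/index.js"
echo '{"mappings":""}' > "$stage/dist/index.js.map"   # planted leak for the demo

fail=0

# Gate 1: no source maps anywhere in the artifact
if find "$stage" -name '*.map' | grep -q .; then
  echo "GATE FAIL: source map present"
  fail=1
fi

# Gate 2: total size under the 10 MB (10240 KB) threshold
size_kb=$(du -sk "$stage" | cut -f1)
if [ "$size_kb" -gt 10240 ]; then
  echo "GATE FAIL: package is ${size_kb} KB"
  fail=1
fi

# Gate 3: only allowlisted top-level paths (dist/, bin/, README.md)
for entry in "$stage"/*; do
  case "$(basename "$entry")" in
    dist|bin|README.md) ;;
    *) echo "GATE FAIL: unexpected path $(basename "$entry")"; fail=1 ;;
  esac
done

if [ "$fail" -ne 0 ]; then
  echo "promotion blocked"
fi
```

Because every gate is a blocking CI step, any single failure is enough to abort promotion; the planted .map file above trips Gate 1 and blocks the release.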

4.1.3 Stage 2: Human Approval and Public Promotion

After Stage 1 passes, a promotion request is opened (for example, via a GitHub environment protection rule, a Jira release ticket, or a dedicated approval workflow). A designated release engineer reviews and approves before Stage 2 executes.
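A minimal sketch of the approval gate, assuming a hypothetical approvals/ directory convention; a real implementation would use the platform's protection rules or ticketing API rather than files on disk.

```shell
#!/usr/bin/env sh
set -eu

version="1.4.2"   # hypothetical release candidate
mkdir -p approvals

# The approval workflow would create this record after a release
# engineer signs off; it is written directly here for the demo.
echo "approved-by: release-eng@example.com" > "approvals/${version}.txt"

# Stage 2 refuses to run without an approval record on file
if [ -f "approvals/${version}.txt" ]; then
  echo "promoting ${version} to the public registry"
  # real step (not executed here): npm publish --registry https://registry.npmjs.org
else
  echo "promotion blocked: no approval on file for ${version}"
fi
```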


4.1.4 Pipeline Process Summary

Note: Why would this have prevented the incidents?

The .map file would have been caught by the automated gate in Stage 1 and would never have reached the public registry (npmjs.com). Even if the gate had been skipped, the human reviewer in Stage 2 would have seen a 60 MB anomaly in the SBOM and halted promotion. Two independent controls would both have to fail for the leak to occur. See Section 4.2 for additional safeguards.

4.2 Harden Package Contents (Defense in Depth)

The two-stage pipeline is the primary control. These are complementary hardening measures

that provide additional layers if the pipeline is bypassed.

4.2.1 Whitelist-Only Package Contents

An explicit files allowlist in package.json makes accidental inclusion of .map files structurally impossible, independent of the pipeline. Unlike .gitignore or .npmignore, which exclude known-bad paths and fail open when a new artifact type appears, the files field includes only known-good paths and fails closed. Any change to this list must go through human review and approval.
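For example, a package.json with an explicit files allowlist (package name and paths are illustrative) looks like this; npm then packs only the listed paths plus a few mandatory files such as package.json and the README:

```json
{
  "name": "example-cli",
  "version": "1.4.2",
  "bin": { "example-cli": "bin/cli.js" },
  "files": [
    "dist/",
    "bin/"
  ]
}
```

Running npm pack --dry-run in CI prints the exact file list that would be published, which makes the allowlist easy to verify during review.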

4.3 Dependency Security: Defending Against Supply Chain Attacks

4.3.1 Lock and Verify Dependencies

Commit lockfiles (package-lock.json, yarn.lock, bun.lockb) to version control. Use npm ci

instead of npm install in all CI/CD environments — ci respects the lockfile exactly and does not

resolve new versions.
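Beyond npm ci, the lockfile itself can be pinned. The sketch below records an approved checksum and fails CI if the lockfile later changes without review; the .ci/lockfile.sha256 path is a hypothetical convention, and the one-line lockfile is a stand-in for a real package-lock.json.

```shell
#!/usr/bin/env sh
set -eu

# Stand-in for the committed lockfile
printf '{"lockfileVersion": 3}\n' > package-lock.json

# At review time: record the approved checksum
mkdir -p .ci
sha256sum package-lock.json > .ci/lockfile.sha256

# In CI: fail if the lockfile no longer matches the approved checksum
sha256sum -c .ci/lockfile.sha256 && echo "lockfile verified"
```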

4.3.2 Dependency Anomaly Monitoring

Use tools such as Socket or Snyk to monitor for dependency anomalies: new maintainers, sudden version bumps, new transitive dependencies. These are the behavioral signals of a supply chain attack.
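Even without a commercial tool, one of these signals, a newly appearing transitive dependency, can be surfaced with plain text tools by diffing the sorted package-name lists of two lockfiles. The two files below are stand-ins for the committed and freshly resolved lockfiles, seeded with the package names from the incident above.

```shell
#!/usr/bin/env sh
set -eu

# Stand-ins for the dependency name lists extracted from two lockfiles
printf 'axios\nfollow-redirects\n' | sort > old.lock
printf 'axios\nfollow-redirects\nplain-crypto-js\n' | sort > new.lock

# comm -13 prints lines present only in the second file:
# dependencies that appeared since the last reviewed resolution
new_deps=$(comm -13 old.lock new.lock)
echo "new transitive dependencies: ${new_deps}"
```

Any non-empty output is a review trigger; in this seeded example the diff surfaces plain-crypto-js, the package that carried the payload.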

4.4 Secrets and Credentials Management

• Never store secrets, API keys, or credentials in source code or build artifacts; use environment variables or a secrets manager/vault

• Implement automated secret scanning in CI to catch credentials before they reach version control or published artifacts

• Maintain a secret rotation policy aligned with NIST guidance, and rotate credentials proactively after a suspected breach, even if no credentials are confirmed in the leaked material.
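A dedicated scanner (e.g., gitleaks or trufflehog) should perform the CI secret scan in practice; the sketch below only illustrates the idea, using a single pattern for AWS-style access key IDs run against a planted fixture containing AWS's documented example key.

```shell
#!/usr/bin/env sh
set -eu

# Fixture directory with a planted, well-known example key ID
scan_dir=$(mktemp -d)
printf 'const key = "AKIAIOSFODNN7EXAMPLE";\n' > "$scan_dir/config.js"

# Gate: block the build if any credential-like string is found
if grep -rEl 'AKIA[0-9A-Z]{16}' "$scan_dir"; then
  echo "BLOCK: credential-like string found"
fi
```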

4.5 Incident Response Preparation

• Maintain a private mirror of all published packages to enable rapid forced-version pulls

without losing rollback capability

• Pre-draft DMCA takedown templates for major code hosting platforms to reduce

response time in the event of unauthorized redistribution

• Define package size and content anomaly thresholds in monitoring — alert on sudden

publish size increases

• Establish a clear escalation path: who authorizes an emergency unpublish, who issues

external communication, who coordinates with registries
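The size-anomaly alert reduces to simple arithmetic. The sizes below are stand-in values (the 60 MB figure echoes the anomaly described in Section 4.1.4); a real monitor would read both sizes from registry metadata.

```shell
#!/usr/bin/env sh
set -eu

prev_kb=1200      # size of the previous published version
new_kb=61440      # size of the new publish: roughly 60 MB
threshold_pct=50  # alert if the artifact grows by more than 50%

growth=$(( (new_kb - prev_kb) * 100 / prev_kb ))
if [ "$growth" -gt "$threshold_pct" ]; then
  echo "ALERT: publish size grew ${growth}% (threshold ${threshold_pct}%)"
fi
```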

5. Conclusion

The Claude Code incident illustrates how a single unpatched toolchain bug, compounded by the

absence of a private registry staging layer and human release gates, can result in catastrophic

IP exposure. The concurrent axios supply chain attack demonstrates that the npm ecosystem

remains a high-value target and that package integrity cannot be assumed without active

verification.

A two-stage release pipeline (a private registry for automated gates, human approval before public promotion, and defense-in-depth measures such as an explicit package-contents allowlist) would have contained the blast radius entirely. This is not a novel architectural pattern; it is standard release

governance for any package with significant commercial or IP value. The Anthropic incident

reveals how easily this governance is omitted when velocity is prioritized over process rigor.

For engineering and DevSecOps teams: treat your publish pipeline as a security boundary with

two distinct zones. Nothing crosses from internal to public without automated verification and

human sign-off. The operational cost is measured in hours of setup; the cost of omitting it, as

demonstrated here, is measured in irreversible competitive exposure.