Claude Code is leaking API keys into public package registries

A recent study by cybersecurity firm Lakera reveals that AI coding assistants like Claude Code are inadvertently caching sensitive API keys and leaking them in public package releases. While these tools accelerate the software development lifecycle, they also introduce hidden vulnerabilities into the automated software supply chain.

Claude Code caches approved terminal commands in a hidden local file. When a developer selects an “allow always” option to bypass repetitive prompts, any credentials passed within that command become permanently stored on the local machine. If the developer publishes the project to a public registry without explicitly ignoring this hidden directory, those stored API keys ship globally alongside the source code.

Industry experts emphasize the novelty and scale of this risk as AI agents move deeper into developer workflows. AI tool vendors must adapt their products to this new reality, and developers must take their own measures to keep AI coding tools from exposing their software libraries.

“AI tooling is evolving at breakneck speed, and in many ways, this is the most software we’ve ever seen created and deployed without mature secure defaults both in the generated code itself and in the surrounding developer environment,” Steve Guiguere, Principal AI Security Advocate at Check Point Software, told TechTalks.

How Claude Code leaks your sensitive data

Claude Code operates using a strict permission system for shell commands. When the assistant attempts to run a command it has not executed before, it presents the developer with authorization options. Selecting “allow always” writes the exact command string to a hidden file located at “.claude/settings.local.json” within the root of the project directory. 

Developers routinely execute authenticated API calls, run deployment scripts, or log into cloud services directly from the terminal. If an environment variable or API key is prepended to one of these commands, the AI agent logs it as a permanent allowlist entry. The agent is functioning exactly as designed, remembering state to reduce friction. But at the same time, it creates a static record of sensitive data.
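A hypothetical example of what such a recorded entry might look like is shown below; the exact schema is illustrative, and the credential and endpoint are fabricated:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run build:*)",
      "Bash(curl -H \"Authorization: Bearer sk-EXAMPLE-not-a-real-key\" https://api.example.com/deploy)"
    ]
  }
}
```

Once the second entry exists, the token travels with the file wherever the project directory goes.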

The exposure occurs during the package publishing phase. Package managers like npm build distribution archives directly from the contents of the project directory. The “.claude/” folder acts similarly to a “.env” file, signaling that it contains personal, environment-specific data. However, it lacks the widespread ecosystem awareness that typically prevents environment files from shipping. Build tools exclude files via “.npmignore” or the “files” field in a “package.json,” but neither mechanism excludes the “.claude/” directory by default. 
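One way to sidestep the default-include behavior is an explicit allowlist. In npm, the “files” field in “package.json” ships only the listed paths (plus a few files npm always includes, such as “package.json” and the README), so hidden directories like “.claude/” never enter the archive. A minimal sketch, with placeholder name and paths:

```json
{
  "name": "example-lib",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist/"
  ]
}
```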

To measure the impact, Lakera built a service monitoring the npm registry’s changes feed. Across a scan window of roughly 46,500 packages, the firm identified 428 packages containing a “.claude/settings.local.json” file. Of those, 33 files across 30 packages contained live credentials. Roughly one in 13 of the shipped settings files exposed sensitive data to the public.

Traditional automated safeguards frequently miss these exposures. Existing secret scanning tools like GitHub Advanced Security are highly effective at finding known credential patterns within source code and version control histories. “This case is different because the credentials are embedded inside an AI tool’s local settings file as part of approved shell command strings,” Guiguere said. The AI assistants have created entirely new locations where secrets quietly accumulate outside the view of established security workflows.
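As a stopgap until scanners add coverage, a team can sweep its own checkouts for these files directly. The sketch below (shell; the regexes cover a few common key shapes for illustration, not an exhaustive ruleset) lists any Claude Code settings files under a directory that contain credential-shaped strings:

```shell
# List Claude Code settings files under a directory that contain strings
# shaped like common API keys. The patterns are examples only; a real
# deployment would reuse the rulesets of a dedicated secret scanner.
scan_claude_secrets() {
  find "${1:-.}" -path '*/.claude/settings.local.json' \
    -exec grep -lE 'AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|sk-[A-Za-z0-9]{20,}' {} +
}
```

For example, `scan_claude_secrets ~/projects` prints the path of every matching settings file in that tree.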

The underlying vulnerability affects any ecosystem that packages files from a project directory. Build tools for Python source distributions (PyPI), RubyGems, and Maven all select files and publish archives based on directory contents, carrying the same exposure risk if the hidden “.claude/” directory is visible to the packaging process, according to Lakera. 

Widespread, automated exploitation of these specific files has not yet been observed publicly, but proactive research proves the capability exists. “If researchers can identify a repeatable way to discover exposed credentials in public registries, we have to assume adversaries can do the same, and likely will,” Guiguere warned. “In security, the right assumption is often that once a weakness is practical and economically interesting, it will eventually be operationalized.”

Countermeasures for developers and enterprises

Developers can immediately mitigate this risk by manually adding the “.claude/” directory to their “.npmignore” and “.gitignore” files. Furthermore, package managers offer preview mechanisms that allow developers to inspect an archive before it goes live. Running commands like “npm pack --dry-run” or using equivalent artifact inspection tools in other languages ensures that hidden AI state files are excluded from the final release.
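A minimal pre-publish routine, sketched below assuming a POSIX shell run from the project root, appends the directory to both ignore files idempotently:

```shell
# Keep Claude Code's local state out of the published archive (.npmignore)
# and out of version control (.gitignore). Idempotent: adds the entry
# only if it is not already present as a line in the file.
for f in .npmignore .gitignore; do
  touch "$f"
  grep -qxF '.claude/' "$f" || printf '.claude/\n' >> "$f"
done

# Then preview the archive before publishing; the listing should not
# contain .claude/settings.local.json:
#   npm pack --dry-run
```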

For creators of AI tools, automatically generating or updating these ignore files during the tool’s initialization would act as a strong secure-by-default mechanism. Despite the logic of this approach, the industry is trending toward a cloud-style shared responsibility model, according to Guiguere.

“The platform will be securable, but not necessarily secure by default,” he said. “Providers will offer mechanisms and guidance, while developers and enterprises remain responsible for configuring and enforcing protections around those tools.”

This is a pattern we’ve seen in other security incidents, such as a recently reported vulnerability in Model Context Protocol (MCP) that exposed computers to remote code execution (RCE) attacks. The maintainers of the protocol and MCP tools characterized the behavior as a feature and said developers are responsible for preventing such incidents. Striking the right balance between flexibility and security remains an unsolved problem in AI tools, and for the time being, developers must exercise extra caution.

But relying on individual developers to remember manual file updates scales poorly across large organizations. Enterprises require automated preventative guardrails built directly into their software delivery pipelines. Platform engineering teams should implement policy checks that automatically fail builds if “.claude/” or similar agent directories appear in publishable artifacts. 
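Such a policy check can be small. The sketch below (shell; the directory names beyond “.claude/” are illustrative) reads a file listing, such as the output of “npm pack --dry-run”, on stdin and fails if any agent state directory would ship:

```shell
# Succeed only if the artifact listing on stdin contains no AI agent
# state files. Intended to be fed the packaging tool's file listing:
#   npm pack --dry-run 2>&1 | artifact_is_clean || exit 1
artifact_is_clean() {
  ! grep -Eq '(^|[ /])\.(claude|cursor|aider)(/|$)'
}
```

Wired into CI, a nonzero exit from this check fails the build before the archive ever reaches the registry.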

The presence of local AI agents also introduces a new endpoint security dimension. Because tools like Claude Code run locally, they create risk at the developer workstation long before code ever reaches a registry. Enterprises need endpoint controls capable of auditing development directories where sensitive agent state accumulates, shifting the burden of security from individual developer hygiene to managed enterprise controls.

The future of agentic AI security

The integration of always-watching AI agents forces a fundamental shift in how the industry views command-line hygiene. Historically, passing an API key directly into a local terminal command via a tool like curl was relatively safe because local bash histories are rarely packaged and published. 

AI coding assistants disrupt that model. “If an AI agent is watching and recording operational context, developers need to stop thinking of the terminal as purely ephemeral,” Guiguere said. “AI coding assistants change that model because they observe, approve, remember, and sometimes persist commands as part of their operating model.”

To harden systems against inadvertent credential hoarding, engineers must recognize that an AI agent running on a desktop is an application runtime. It requires the same architectural discipline expected of cloud infrastructure. Future security models will require sandboxing agents, mounting only the specific directories they need to function, and strictly enforcing the principle of least privilege. 

Where secrets are required for complex workflows, they must be retrieved dynamically from controlled secret stores or secret managers using scoped, short-lived access, rather than relying on hard-coded credentials embedded in repeatable command-line approvals.
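In practice, that can be as simple as approving a command that embeds a lookup rather than a literal key. The sketch below is illustrative: “fetch_token” stands in for a real secret-manager call (for example, a vault or cloud-provider CLI invocation), and the endpoint is fabricated:

```shell
# Stand-in for a short-lived token lookup; a real implementation would
# invoke a secret manager CLI here instead of echoing a demo value.
fetch_token() {
  echo "short-lived-demo-token"
}

# The string a developer approves (and that the agent may persist) now
# contains only the lookup expression, never the credential itself:
#   curl -H "Authorization: Bearer $(fetch_token)" https://api.example.com/deploy
```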
