How GhostClaw malware targets the OpenClaw AI agent boom

GhostClaw

Threat actors are exploiting the rapid adoption of AI agents by designing malware that targets the agent itself. A new malware campaign, known as GhostClaw or GhostLoader, targets AI-assisted workflows and GitHub repositories to deliver credential-stealing payloads.

First discovered by JFrog Security Research and later analyzed by Jamf Threat Labs, GhostClaw represents a new vector in software supply chain attacks. Rather than relying solely on human developers to download malicious packages, the operators build traps that AI agents like OpenClaw trigger autonomously. Once executed, the malware establishes a persistent Remote Access Trojan (RAT) that harvests system credentials, browser data, developer tokens, and cryptocurrency wallets.

The campaign preys on the high-level system permissions developers grant to local AI agents. GhostClaw shows that the agent itself is becoming a primary attack surface, and it should be a wake-up call for development teams that rely on these frameworks to automate coding tasks.

Why Meta’s V-JEPA 2.1 model is a massive step forward for real-world AI

Robot grasping object

This article is part of our coverage of the latest in AI research.

Researchers at Meta have released V-JEPA 2.1, the latest iteration of their video world model. Current artificial intelligence models often struggle to simultaneously capture the global dynamics of a video and the fine-grained, local spatial details necessary for precise physical interactions. V-JEPA 2.1 bridges this gap by introducing several innovations to its architecture, training recipe, and training data. 

Experiments show that V-JEPA 2.1 yields much better and faster results in robotic grasping, autonomous navigation of the physical world, predicting object interactions, and estimating 3D depth. These are the kinds of advances that can unlock new applications for AI in the physical world.

Multi-level AI prompt engineering: A new tool for scientific discovery

hybrid brain

Modern AI, particularly large language models (LLMs) and specialized reasoning architectures, has evolved beyond simple data analysis. It has become a new and unusual tool for scientific research, capable not only of processing massive datasets but also of simulating complex reasoning chains and virtual experiments. These chains (in physics, sequences of mathematically related computations) can be run in a unified digital environment. This capability enables a form of exploratory mathematical modeling: the AI rapidly iterates through theoretical constructs, proposes new relationships between fundamental principles, and generates testable hypotheses. This method promises to significantly simplify and accelerate the creation and testing of new scientific concepts, and even fully fledged testable theories built on AI computations, fundamentally changing the epistemology of scientific discovery.
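The multi-level chaining described above can be sketched as a pipeline of staged prompts, each consuming the previous stage's output. This is a toy illustration of the general pattern, not the specific method from the article; `call_llm` stands in for whatever text-in/text-out LLM client the reader supplies, and the prompt wording is entirely illustrative:

```python
def multi_level_prompt(call_llm, observation):
    """Run a hypothetical three-stage scientific prompt chain.

    call_llm: any function that takes a prompt string and returns
    the model's text response (an assumption, not a real API).
    """
    # Level 1: propose a candidate hypothesis from an observation
    hypothesis = call_llm(f"Propose a testable hypothesis for: {observation}")
    # Level 2: have the model critique its own proposal
    critique = call_llm(f"List weaknesses of this hypothesis: {hypothesis}")
    # Level 3: refine the hypothesis in light of the critique
    refined = call_llm(f"Refine the hypothesis {hypothesis} given: {critique}")
    return refined
```

Each level narrows the search space before the next one runs, which is what lets the chain iterate through theoretical constructs faster than a single monolithic prompt.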

Why AI won’t kill SaaS

AI vs SaaS


How C-JEPA is teaching AI the physics of the physical world

Causal AI

This article is part of our coverage of the latest in AI research.

Picture a baseball flying toward a swinging bat. A human observer can effortlessly predict that the ball will abruptly change direction and speed upon impact. We possess an intuitive grasp of physics and causality. For artificial intelligence, however, predicting this kind of interaction-dependent object dynamics is incredibly difficult. Learning the causal relations and interactions of objects remains a key challenge for AI systems.

baseball bat hitting ball

Enter C-JEPA, a new world model built on the Joint Embedding Predictive Architecture (JEPA). C-JEPA is designed to tackle these exact object dynamics. It embeds a causal inductive bias directly into the learning process, preventing the AI from adopting meaningless shortcuts. By doing so, it enables the model to reason accurately about object interactions and handle counterfactual scenarios.

In empirical tests, C-JEPA demonstrated significant improvements in visual question answering, particularly excelling at counterfactual reasoning (e.g., what happens if the bat misses the ball?) compared to other architectures. Beyond reasoning, C-JEPA offers a highly efficient framework for building interaction-aware AI applications. It can execute complex predictive control tasks more than eight times faster than standard models while using roughly 1% of the typical input features, drastically reducing computational and memory overhead.

While the model still needs to show its mettle in the messiness of the real world, it could be an important milestone for AI in the physical world, such as robotics and self-driving cars.

How Databricks’ FlashOptim cuts LLM training memory by 50 percent

LLM training optimization

This article is part of our coverage of the latest in AI research.

Training large language models is an expensive endeavor, largely due to the massive accelerator memory required for each parameter during the training process. To reduce the costs, researchers at Databricks introduced FlashOptim, a suite of memory-optimization techniques designed for common deep learning optimizers. FlashOptim acts as a drop-in replacement that slashes per-parameter memory consumption by more than 50 percent. It achieves this without sacrificing training throughput or model quality. According to the research team, this efficiency “enables practitioners and researchers with limited hardware to train larger models than previously feasible.”
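The headline number is easy to sanity-check with back-of-envelope arithmetic. Adam-style optimizers keep two extra state tensors (first and second moments) per parameter, traditionally in 32-bit floats, so shrinking how each state value is stored shrinks optimizer memory proportionally. The calculation below is a toy illustration of that accounting, not FlashOptim's actual mechanism:

```python
def adam_state_bytes(n_params, bytes_per_value):
    """Memory for Adam's two moment tensors (m and v), one value each
    per parameter. bytes_per_value encodes the storage precision."""
    return 2 * n_params * bytes_per_value

n = 1_000_000_000                       # a 1B-parameter model
fp32 = adam_state_bytes(n, 4)           # standard full-precision state: 8 GB
bf16 = adam_state_bytes(n, 2)           # 16-bit state: 4 GB, a 50% cut
int8 = adam_state_bytes(n, 1)           # 8-bit quantized state: 2 GB, a 75% cut
print(bf16 / fp32, int8 / fp32)         # 0.5 0.25
```

Halving the storage width alone gives exactly 50 percent; going past 50 percent, as the paper claims, requires more aggressive compression of the state, such as 8-bit quantization.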

How sparse attention solves the memory bottleneck in long-context LLMs

Sparse attention


How ‘semantic chaining’ jailbreaks image generation models

Semantic chaining attack

NeuralTrust researchers have identified a critical vulnerability in the safety architecture of leading multimodal models, including Grok 4, Gemini Nano Banana Pro, and Seedance 4.5. The technique, named “Semantic Chaining,” allows users to bypass core safety filters and generate prohibited content by exploiting the models’ ability to perform complex, multi-stage image modifications. This discovery demonstrates a functional flaw in how multimodal intent is governed, proving that even advanced models can be guided to produce policy-violating outputs by bypassing “black box” safety layers.

How Sakana AI’s new technique solves the problems of long-context LLM tasks

LLM context management

This article is part of our coverage of the latest in AI research.

A new technique developed by researchers at Sakana AI, called Context Re-Positioning (RePo), allows large language models (LLMs) to dynamically reorganize their internal view of the input to better handle long-context tasks.

LLMs process information in a strictly linear fashion, reading input from left to right regardless of the content’s actual structure. While this mimics how humans read a novel, it fails to capture the complexity of real-world data such as coding repositories, databases, or scattered documents in a retrieval-augmented generation (RAG) pipeline. 
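The core idea of decoupling a token's position from its arrival order can be sketched in a few lines. The function below is a simplified illustration of repositioning, not Sakana AI's actual algorithm: given context chunks and a relevance score for each (how the scores are obtained is left open), it assigns new position offsets so that the most relevant chunks sit earliest and closest together, regardless of the order in which they arrived:

```python
def reposition(chunks, relevance):
    """Assign hypothetical position offsets by relevance rather than
    arrival order. chunks: list of token sequences (here, strings);
    relevance: one score per chunk. Returns {chunk_index: offset}."""
    # Rank chunks from most to least relevant
    order = sorted(range(len(chunks)), key=lambda i: -relevance[i])
    positions = {}
    offset = 0
    for i in order:
        positions[i] = offset          # chunk i now "starts" here
        offset += len(chunks[i])       # next chunk follows contiguously
    return positions

# A RAG-style example: the middle chunk is most relevant, so it is
# moved to position 0 even though it arrived second.
offsets = reposition(["aaaa", "bb", "c"], [0.1, 0.9, 0.5])
print(offsets)  # {1: 0, 2: 2, 0: 3}
```

In a real model the remapped indices would feed the positional encoding (e.g., the `position_ids` passed to a transformer) rather than physically reordering the text.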

Smarter trade: How AI turns regulatory burden into competitive edge

Digital supply chain

By Juliet Mirambo


Global trade has always been complex, but in recent years, the pace of regulatory change has accelerated. Companies now face a maze of overlapping rules that vary by country, region, and industry. A single overlooked regulation can lead to multimillion-dollar fines, blocked shipments, reputational damage, or even loss of market access. For businesses operating across borders, keeping up has become a high-stakes challenge.

Artificial Intelligence is emerging as one of the most promising tools to manage this complexity. By automating due diligence, monitoring supply chains in real time, and flagging risks before they escalate, AI is helping organizations close the compliance gap. What was once a reactive process of catching mistakes after they happened is becoming a proactive system of identifying risks in advance.