
280+ Leaky Skills: How OpenClaw & ClawHub Are Exposing API Keys and PII

On Monday, February 3rd, Snyk Staff Senior Engineer Luca Beurer-Kellner and Senior Incubation Engineer Hemang Sarkar uncovered a massive systemic vulnerability in the ClawHub ecosystem (clawhub.ai). Unlike the malware campaign we reported yesterday, which involved specific malicious actors, this new finding reveals a broader and perhaps more dangerous trend: widespread insecurity by design. In this write-up, Snyk presents Leaky Skills, an investigation uncovering exposed credentials and insecure credential usage in Agent Skills.
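The class of leak described above can be illustrated with a minimal, hypothetical scanner that flags hard-coded credential patterns in skill text. The patterns and the sample skill below are illustrative assumptions, not ClawHub's actual file format or Snyk's detection rules:

```python
import re

# Illustrative patterns for common credential shapes; production secret
# scanners use far larger, provider-specific rule sets.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "bearer_token": re.compile(
        r"(?i)authorization\s*[:=]\s*['\"]?bearer\s+[A-Za-z0-9._\-]{20,}"
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of credential patterns found in a blob of text."""
    return [name for name, pat in CREDENTIAL_PATTERNS.items() if pat.search(text)]

# Hypothetical skill definition that embeds a key directly in its config.
leaky_skill = 'api_key = "sk_live_abcdefghijklmnop1234"'
print(scan_text(leaky_skill))  # -> ['generic_api_key']
```

Regex matching alone produces false positives and misses encoded secrets, which is why real-world scanning combines patterns with entropy checks and provider-side token validation.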

The Prescriptive Path to Operationalizing AI Security

In introducing the AI Security Fabric, we outlined how security must evolve as software is built by humans, models, and autonomous agents working at machine speed. The Fabric defines the architectural shift required to build trust at AI speed, delivered through the Snyk AI Security Platform. We’re now turning to the next question: how organizations put that vision into practice. Operationalizing AI security is not about enabling a single feature or deploying a single tool.

Introducing the AI Security Fabric: Empowering Software Builders in the Era of AI

Today, we’re thrilled to introduce the AI Security Fabric, delivered through the Snyk AI Security Platform, and operationalized through a prescriptive path for AI security. As software creation shifts to humans, models, and autonomous agents working together at machine speed, security must evolve just as fundamentally. The AI Security Fabric defines the new paradigm, and the Prescriptive Path shows how the Snyk AI Security Platform gets you there.

Snyk Advisor is Reshaping Package Intelligence on Snyk Security Database

Choosing safe, healthy open source dependencies shouldn’t require jumping between tools or piecing together context from multiple places. Developers and AppSec teams need package health signals exactly where security decisions already happen. This is why we’re bringing Snyk Advisor data into security.snyk.io.

Live From Davos: The End of Human-Speed Security

This week, I am joining global policymakers and innovators in Davos for the World Economic Forum. The theme for 2026 is "A Spirit of Dialogue", a recognition that our toughest challenges require shared understanding and cooperation. As we gather to discuss the future of the global economy, we have an opportunity to lead an urgent conversation. It centers on the reality of artificial intelligence (AI), not the hype about what it might do, but on what it is already doing in our enterprises.

ServiceNow's Virtual Agent Vulnerability Shows Why AI Security Needs Traditional AppSec Foundations

The recent disclosure of what security researchers are calling "the most severe AI-driven vulnerability uncovered to date" in ServiceNow's platform serves as a stark reminder: securing agentic AI isn't just about new AI-specific controls; it requires getting the fundamentals right first.

Beyond Detection: Building a Resilient Software Supply Chain (Lessons from the Shai-Hulud Post-Mortem)

The Shai-Hulud npm supply chain incident was a wake-up call for the industry. The attack involved malicious packages containing hidden exfiltration scripts that targeted developers’ machines and CI environments. At Snyk, we watched this incident unfold in real time, observing how quickly attackers can pivot from one compromised credential to a full-scale ecosystem infection.
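Attacks of this shape typically hide their payload in npm lifecycle hooks that run automatically at install time. A minimal sketch of auditing a manifest for such hooks (the field names follow npm's package.json format; the sample manifest and risk heuristic are our own illustrative assumptions, not Shai-Hulud indicators):

```python
import json

# Lifecycle hooks that npm runs automatically during `npm install`,
# making them a common vehicle for supply-chain payloads.
AUTO_RUN_HOOKS = ("preinstall", "install", "postinstall")

def auto_run_scripts(package_json: str) -> dict[str, str]:
    """Return any auto-run lifecycle scripts declared in a package.json string."""
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    return {hook: cmd for hook, cmd in scripts.items() if hook in AUTO_RUN_HOOKS}

# Hypothetical manifest: the `test` script is harmless (never auto-run),
# but the `postinstall` hook executes arbitrary code on every install.
manifest = json.dumps({
    "name": "innocuous-utils",
    "scripts": {"postinstall": "node bundle.js", "test": "jest"},
})
print(auto_run_scripts(manifest))  # -> {'postinstall': 'node bundle.js'}
```

A flagged hook is not proof of malice (many legitimate packages compile native code in postinstall), but npm's `--ignore-scripts` flag can disable these hooks entirely in CI while a package is reviewed.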

Secure by Default: Why Snyk and Augment Code are the New Standard for AI Development

AI coding assistants have fundamentally changed development velocity. With tools like Augment Code, developers can now build and iterate at a pace that was unimaginable just a few years ago. However, this explosion in speed has created a new challenge: security teams, often still relying on manual review processes, are becoming the bottleneck.

The Holiday Whisper: Shai-Hulud 3.0

The end-of-year holiday period is traditionally a time for code freezes and quiet rotations; however, it is also a favored window for opportunistic attackers. Threat actors love the holidays; they know that with development teams out of the office and response times naturally lagging, a small window opens for them to test new exploits without immediate detection. Recently, a security researcher discovered a new, contained variant of Shai-Hulud, dubbed "The Golden Path" (v3.0).

From Code to Agents: Proactively Securing AI-Native Apps with Cursor and Snyk

The rapid adoption of AI agents for development is creating a critical security gap. We are moving from predictable logic, deterministic code paths, and human-driven workflows to non-deterministic agents that reason, plan, and act autonomously using large language models across the broader software development lifecycle. As enterprises adopt these autonomous AI agents, the core challenge isn’t just the new risks and attack vectors; it’s a loss of runtime control.