Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Why Your "Skill Scanner" Is Just False Security (and Maybe Malware)

Maybe you’re an AI builder, or maybe you’re a CISO. You've just authorized the use of AI agents for your dev team. You know the risks, including data exfiltration, prompt injection, and unvetted code execution. So when your lead engineer comes to you and says, "Don't worry, we're using Skill Defender from ClawHub to scan every new Skill," you breathe a sigh of relief. You checked the box. But have you checked this Skills scanner?

How a Malicious Google Skill on ClawHub Tricks Users Into Installing Malware

You ask your OpenClaw agent to "check my Gmail." It replies, "I need to install the Google Services Action skill first. Shall I proceed?" You say yes. The agent downloads the skill from ClawHub. It reads the instructions. Then, it pauses. "This skill requires the 'openclaw-core' utility to function," the agent reports, displaying a helpful download link from the skill's README. "Please run this installer to continue." You copy the command. You paste it into your terminal. You have just been compromised.

I Built a Production-Ready App in 20 Minutes with Claude Opus 4.6

My boss dropped a bombshell at 4:00 PM: build a secure, production-ready app from scratch by tomorrow morning. Instead of panicking, I put Claude Opus 4.6 to the test. In this video, I walk you through the entire end-to-end process of using an AI agent to architect, code, and debug a full-stack application. We’ll look at "Plan Mode," how the AI handles environment errors (like Windows SQLite issues), and most importantly, how we verified the AI's code for security vulnerabilities using Snyk.

Snyk Finds Prompt Injection in 36%, 1467 Malicious Payloads in a ToxicSkills Study of Agent Skills Supply Chain Compromise

The first comprehensive security audit of the Agent Skills ecosystem reveals malware, credential theft, and prompt injection attacks targeting OpenClaw, Claude Code, and Cursor users Agent skills are reusable capability packages that instruct AI agents how to interact with tools, APIs, or system resources—and they're rapidly becoming standard in AI-powered development.

280+ Leaky Skills: How OpenClaw & ClawHub Are Exposing API Keys and PII

On Monday, February 3rd, Snyk Staff Senior Engineer Luca Beurer-Kellner and Senior Incubation Engineer Hemang Sarkar uncovered a massive systemic vulnerability in the ClawHub ecosystem (clawhub.ai). Unlike the malware campaign we reported yesterday involving specific malicious actors, this new finding reveals a broader, perhaps more dangerous trend: widespread insecurity by design. In this write-up, Snyk is presenting Leaky Skills - uncovering exposed and insecure credentials usage in Agent Skills.

The Prescriptive Path to Operationalizing AI Security

In introducing the AI Security Fabric, we have outlined how security must evolve as software is built by humans, models, and autonomous agents working at machine speed. The Fabric defines the architectural shift required to build trust at AI speed, delivered through the Snyk AI Security Platform. We’re now focusing on the next question: how organizations put that vision into practice. Operationalizing AI security is not about enabling a single feature or deploying a tool.

Introducing the AI Security Fabric: Empowering Software Builders in the Era of AI

Today, we’re thrilled to introduce the AI Security Fabric, delivered through the Snyk AI Security Platform, and operationalized through a prescriptive path for AI security. As software creation shifts to humans, models, and autonomous agents working together at machine speed, security must evolve just as fundamentally. The AI Security Fabric defines the new paradigm, and the Prescriptive Path shows how the Snyk AI Security Platform gets you there.

Snyk Advisor is Reshaping Package Intelligence on Snyk Security Database

Choosing safe, healthy open source dependencies shouldn’t require jumping between tools or piecing together context from multiple places. Developers and AppSec teams need package health signals exactly where security decisions already happen. This is why we’re bringing Snyk Advisor data into security.snyk.io.