
Axios npm Package Compromised: Supply Chain Attack Delivers Cross-Platform RAT

On March 31, 2026, two malicious versions of axios, the enormously popular JavaScript HTTP client with over 100 million weekly downloads, were briefly published to npm via a compromised maintainer account. The compromised versions contained a hidden dependency that deployed a cross-platform remote access trojan (RAT) to any machine that ran npm install (or the equivalent in other package managers, such as Bun) during a two-hour window. The malicious versions (1.14.1 and 0.30.4) were removed from npm by 03:29 UTC.

The 5 Principles of Snyk's Developer Experience

In the age of AI-driven development, speed is the new baseline. But as AI agents accelerate the pace of coding, they also amplify the risk of security bottlenecks. At Snyk, we believe a superior Developer Experience (DX) is the only way to secure this new frontier. DX is not just a layer on top of the product. It is the foundation that allows developers to unleash AI innovation securely. We think of DX as a system of decisions that compound over time.

From Discovery to Defense: Why AI Red Teaming Is the Next Step After AI-SPM

This week, we announced the general availability of Evo AI-SPM, the first operational layer of Snyk’s AI Security Fabric. AI-SPM gives security teams something they’ve never had before: a system of record for AI risk, with the ability to discover models, frameworks, datasets, and agent infrastructure embedded directly in code. For many organizations, that discovery step is a breakthrough.

The Next Era of AppSec: Why AI-Generated Code Needs Offensive Dynamic Testing

My colleague Manoj Nair recently wrote about the growing gap between what AI builds and what security teams actually test. He made the case that the speed of AI-driven development has fundamentally outpaced validation, and that the answer isn't to slow down but to change what testing means. I agree with every word.

AI Is Building Your Attack Surface. Are You Testing It?

The market is flooded with claims. One vendor tops a leaderboard. Another raises nine figures on a pitch deck. Meanwhile, your developers shipped three AI-generated services before lunch. Here's the conversation the industry isn't having, and the one we've been building toward for years. There's a version of this conversation happening inside every security team right now. Someone demos an AI coding assistant. The speed is undeniable, and the team is in awe, even while staying cautious, sometimes skeptical.

Securing the Agent Skills Registry: How Snyk and Tessl Are Setting the Standard

Agent skills are becoming the building blocks of AI-native software development, giving coding agents structured, versioned context, like how to use your APIs, how to build in your codebase, and how to enforce your team's policies. Developers install them from registries the same way they install npm packages or Python libraries. But unlike npm or PyPI, the agent skills ecosystem is new.

I Read Cursor's Security Agent Prompts, So You Don't Have To

Cursor's security team built four autonomous agents that review 3,000+ PRs per week, catch 200+ vulnerabilities, and open fix PRs automatically. The engineering is impressive, and the prompts are shockingly simple. But there's a meaningful gap between "LLM agents reviewing PRs" and "enterprise security program," and that gap is exactly where things get interesting.

The 89% Problem: How LLMs Are Resurrecting the "Dormant Majority" of Open Source

AI coding assistants are quietly resurrecting millions of abandoned open source packages. For the last decade, developers relied on a simple heuristic for open source security: Prevalence = Trust. If a package was downloaded millions of times a week (lodash, react, requests), we assumed it was "safe enough" because thousands of eyes were on it. If it was obscure, we approached with caution.

The Rise of the AI Security Engineer: A New Discipline for an AI-Native World

We are witnessing the birth of a new profession at the intersection of security engineering and security operations, a discipline that didn't exist five years ago because the systems it protects didn't exist five years ago. As artificial intelligence moves from experimental to essential and agentic systems begin to perceive, reason, act, and learn autonomously, we need defenders who can operate at the same velocity. I'm talking about the AI Security Engineer.