
You Patched LiteLLM, But Do You Know Your AI Blast Radius?

For a brief window, a widely used open source package in the AI ecosystem was compromised with credential-stealing malware. LiteLLM, a model gateway used to route requests to more than 100 LLM providers, is downloaded millions of times per day. In that short window, the malicious versions were likely pulled tens of thousands of times before being caught.

Building AI Security with Our Customers: 5 Lessons from Evo's Design Partner Program

In 2025, we embarked on a new journey to secure the most important technology transformation of this decade: generative AI. Our vision is to help companies secure their AI fast, so that they can innovate on the cutting edge and put AI and agentic use cases into production. To do this, we built Evo, the world’s first agentic orchestrator for AI security. And the foundation of any product is customer needs.

Axios npm Package Compromised: Supply Chain Attack Delivers Cross-Platform RAT

On March 31, 2026, two malicious versions of axios, the enormously popular JavaScript HTTP client with over 100 million weekly downloads, were briefly published to npm via a compromised maintainer account. The packages contained a hidden dependency that deployed a cross-platform remote access trojan (RAT) to any machine that ran npm install (or equivalent in other package managers like Bun) during a two-hour window. The malicious versions (1.14.1 and 0.30.4) were removed from npm by 03:29 UTC.
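For teams triaging exposure, the first question is whether a project resolved one of the two flagged releases. A minimal shell sketch (the helper name `is_compromised_axios` is ours, not from any advisory) that checks a version string against the known-bad releases named above:

```shell
# Hypothetical helper (our naming): succeeds when the given version
# string matches one of the two compromised axios releases.
is_compromised_axios() {
  case "$1" in
    1.14.1|0.30.4) return 0 ;;  # known-bad release: rotate credentials
    *) return 1 ;;              # not one of the flagged versions
  esac
}

# Example: feed it the version resolved in your project, e.g. taken
# from `npm ls axios --depth=0` or from your lockfile.
if is_compromised_axios "1.14.1"; then
  echo "WARNING: compromised axios version installed"
fi
```

Because the payload lived in a hidden dependency, checking the top-level axios version in the lockfile is a starting point, not a full audit; any machine that installed during the window should be treated as potentially compromised.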

GitHub Spark vs. Replit - Vibe Code Challenge

We pit GitHub Spark (in public preview) against Replit's AI agent. The challenge? Build a fully functional community forum for DIY tips from a single prompt. We compare design aesthetics, mobile responsiveness, login security, and deployment speed to see which tool creates a truly production-ready application. Which one do you think deserved the win? Let me know in the comments!

The 5 Principles of Snyk's Developer Experience

In the age of AI-driven development, speed is the new baseline. But as AI agents accelerate the pace of coding, they also amplify the risk of security bottlenecks. At Snyk, we believe a superior Developer Experience (DX) is the only way to secure this new frontier. DX is not just a layer on top of the product. It is the foundation that allows developers to unleash AI innovation securely. We think of DX as a system of decisions that compound over time.

From Discovery to Defense: Why AI Red Teaming Is the Next Step After AI-SPM

This week, we announced the general availability of Evo AI-SPM, the first operational layer of Snyk’s AI Security Fabric. AI-SPM gives security teams something they’ve never had before: a system of record for AI risk, with the ability to discover models, frameworks, datasets, and agent infrastructure embedded directly in code. For many organizations, that discovery step is a breakthrough.

The Next Era of AppSec: Why AI-Generated Code Needs Offensive Dynamic Testing

My colleague Manoj Nair recently wrote about the growing gap between what AI builds and what security teams actually test. He made the case that the speed of AI-driven development has fundamentally outpaced validation, and that the response can't be to slow down; it has to be to change what testing means. I agree with every word.

AI Is Building Your Attack Surface. Are You Testing It?

The market is flooded with claims. One vendor tops a leaderboard. Another raises nine figures on a pitch deck. Meanwhile, your developers shipped three AI-generated services before lunch. Here's the conversation the industry isn't having, and the one we've been building toward for years. There's a version of this conversation happening inside every security team right now. Someone demos an AI coding assistant. The speed is undeniable, and the team is in awe, even if it remains cautious and sometimes skeptical.