
ServiceNow's Virtual Agent Vulnerability Shows Why AI Security Needs Traditional AppSec Foundations

The recent disclosure of what security researchers are calling "the most severe AI-driven vulnerability uncovered to date" in ServiceNow's platform serves as a stark reminder: securing agentic AI isn't just about new AI-specific controls; it requires getting the fundamentals right first.

A New Era for AI Coding? GPT 5.2 vs. Security Vulnerabilities

Can OpenAI’s GPT 5.2 actually build a production-ready, secure application from a single prompt? In this video, we put the latest model to the test by asking it to build a full-stack Node.js note-taking app. We evaluate its dependency choices, dive into a surprising fix for a long-standing CSRF vulnerability, and run a full security audit using Snyk. Is this the new gold standard for AI coding models?

Beyond Detection: Building a Resilient Software Supply Chain (Lessons from the Shai-Hulud Post-Mortem)

The Shai-Hulud npm supply chain incident was a wake-up call for the industry. The attack involved malicious packages containing hidden exfiltration scripts that targeted developers’ machines and CI environments. At Snyk, we watched this incident unfold in real-time, observing how quickly attackers can pivot from one compromised credential to a full-scale ecosystem infection.
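Since install-time lifecycle scripts were the delivery vehicle here, one low-effort hardening step is to disable them and pin exact versions. A minimal `.npmrc` sketch (both settings are standard npm options; whether they suit a given project is a judgment call, since some legitimate packages rely on postinstall scripts):

```
# Refuse to run package lifecycle scripts (preinstall/postinstall),
# the common delivery vehicle for hidden exfiltration payloads.
ignore-scripts=true

# Save exact versions instead of ^ ranges, so a freshly
# compromised release isn't pulled in automatically.
save-exact=true
```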

Secure by Default: Why Snyk and Augment Code Are the New Standard for AI Development

AI coding assistants have fundamentally changed development velocity. With tools like Augment Code, developers can now build and iterate at a pace that was unimaginable just a few years ago. However, this explosion in speed has created a new challenge: security teams, often still relying on manual review processes, are becoming the bottleneck.

A New Model You Haven't Heard About (GitHub Raptor Mini)

Can an under-the-radar AI tool actually build a secure, functional CRUD note-taking app from scratch? In this video, I put GitHub Raptor Mini to the test to see if it can design, implement, and reason through a real-world CRUD application — including authentication, data handling, and basic security considerations.

The Holiday Whisper: Shai-Hulud 3.0

The end-of-year holiday period is traditionally a time for code freezes and quiet rotations, but it is also a favored window for opportunistic attackers: with development teams out of the office and response times naturally lagging, threat actors get a brief opening to test new exploits without immediate detection. Recently, a security researcher discovered a new, contained variant of Shai-Hulud, dubbed "The Golden Path" (v3.0).

From Code to Agents: Proactively Securing AI-Native Apps with Cursor and Snyk

The rapid adoption of AI agents for development is creating a critical security gap. We are moving from predictable logic, deterministic code paths, and human-driven workflows to non-deterministic agents that reason, plan, and act autonomously using large language models across the broader software development lifecycle. As enterprises adopt these autonomous AI agents, the core challenge isn’t just the new risks and attack vectors; it’s a loss of runtime control.