
GLM 4.7 vs. The Giants: Is This the New King of AI Coding?

Can a lesser-known model compete with the likes of OpenAI, Google, and Anthropic? In this video, we put Z.ai’s GLM 4.7 to the ultimate test, tasking it with building a production-ready, secure Node.js note-taking application from a single prompt to see whether its code quality and security stand up to the big-name foundation models.

Live From Davos: The End of Human-Speed Security

This week, I am joining global policymakers and innovators in Davos for the World Economic Forum. The theme for 2026 is "A Spirit of Dialogue", a recognition that our toughest challenges require shared understanding and cooperation. As we gather to discuss the future of the global economy, we have an opportunity to lead an urgent conversation. It centers on the reality of artificial intelligence (AI): not the hype about what it might do, but what it is already doing in our enterprises.

Testing MiniMax M2.1 for AI Coding: The Results Might Surprise You

Can "lesser-known" AI models actually keep up with the giants like Google, OpenAI, and Anthropic? In today’s video, we put MiniMax M2.1 to the ultimate test: building a production-ready, secure Node.js note-taking application from a single prompt. We’ll explore how to access MiniMax natively in the Windsurf IDE, walk through the debugging process for common errors (like environment variables and OS-specific dependencies), and perform a deep-dive security audit using Snyk. Stick around until the end to learn how to integrate MiniMax M2.1 into VS Code using OpenRouter.
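The environment-variable errors mentioned above typically come down to required settings being absent or empty when the app starts. A minimal Node.js sketch of a fail-fast startup check (the variable names `PORT` and `DATABASE_URL` are hypothetical examples, not taken from the video):

```javascript
// Return the names of required environment variables that are missing or
// empty, so an app can fail fast at startup instead of crashing mid-request.
function findMissingEnv(required, env = process.env) {
  return required.filter((name) => !(name in env) || env[name] === "");
}

// At startup, one might do:
//   const missing = findMissingEnv(["PORT", "DATABASE_URL"]);
//   if (missing.length > 0) {
//     console.error(`Missing environment variables: ${missing.join(", ")}`);
//     process.exit(1);
//   }
```

Surfacing all missing variables at once, rather than failing on the first lookup, makes this class of error much faster to debug.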

ServiceNow's Virtual Agent Vulnerability Shows Why AI Security Needs Traditional AppSec Foundations

The recent disclosure of what security researchers are calling "the most severe AI-driven vulnerability uncovered to date" in ServiceNow's platform serves as a stark reminder: securing agentic AI isn't just about new AI-specific controls; it requires getting the fundamentals right first.

A New Era for AI Coding? GPT 5.2 vs. Security Vulnerabilities

Can OpenAI’s GPT 5.2 actually build a production-ready, secure application from a single prompt? In this video, we put the latest model to the test by asking it to build a full-stack Node.js note-taking app. We evaluate its dependency choices, dive into a surprising fix for a long-standing CSRF vulnerability, and run a full security audit using Snyk. Is this the new gold standard for AI coding models?

Beyond Detection: Building a Resilient Software Supply Chain (Lessons from the Shai-Hulud Post-Mortem)

The Shai-Hulud npm supply chain incident was a wake-up call for the industry. The attack involved malicious packages containing hidden exfiltration scripts that targeted developers’ machines and CI environments. At Snyk, we watched this incident unfold in real time, observing how quickly attackers can pivot from one compromised credential to a full-scale ecosystem infection.
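Because attacks like this rely on packages' lifecycle scripts running automatically at install time, one widely recommended hardening step is to stop npm from executing them by default. A minimal `.npmrc` sketch (project- or CI-level; some packages legitimately need install scripts, so any exceptions should be allow-listed deliberately):

```ini
# .npmrc — refuse to run preinstall/install/postinstall scripts from dependencies
ignore-scripts=true
```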

Secure by Default: Why Snyk and Augment Code are the New Standard for AI Development

AI coding assistants have fundamentally changed development velocity. With tools like Augment Code, developers can now build and iterate at a pace that was unimaginable just a few years ago. However, this explosion in speed has created a new challenge: security teams, often still relying on manual review processes, are becoming the bottleneck.