
Scan your AI-generated code from Cursor using Model Context Protocol (MCP)

We’re happy to announce that Cursor has validated Snyk’s CLI MCP server and added Snyk to its curated set of MCP tools from official providers. At Snyk, we recognized early on that although AI assistants accelerate development, they can inadvertently introduce vulnerable patterns, pull in outdated libraries, or reproduce code with known security flaws. To maintain the rapid iteration cycles that AI enables, developers need security that is as agile as AI itself.
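As a rough sketch of what this looks like in practice, an MCP server can be registered in Cursor's `.cursor/mcp.json` file. The exact Snyk CLI subcommand and flags below are assumptions based on Snyk's stdio MCP transport; check the Snyk and Cursor documentation for the current invocation:

```json
{
  "mcpServers": {
    "snyk": {
      "command": "snyk",
      "args": ["mcp", "-t", "stdio"]
    }
  }
}
```

With the server registered, Cursor's agent can call Snyk scans as a tool during a coding session, so AI-generated changes are checked as they are written rather than after the fact.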

The New Threat Landscape: AI-Native Apps and Agentic Workflows

Businesses are moving beyond AI experiments and proofs of concept. As we approach what IDC is predicting will be the “AI pivot years” of 2025-2026, organizations are prioritizing, planning, and building for scale. This shift includes AI agents — self-directed tools that automate tasks — as technology providers strive to simplify development workflows. Under the surface, AI systems expose an expanded threat landscape that spans the software development lifecycle (SDLC).

Catch Bugs Faster: Cursor's BugBot for AI Code Review

In this video, we dive into Cursor's 1.0 release, focusing on its new BugBot feature. This AI-powered tool integrates with your GitHub workflow to automatically review pull requests and identify potential bugs. We'll show you how to set up BugBot, trigger it on a pull request, and analyze the issues it finds, including a real-world example of it catching errors in AI-generated code from Google's Jules tool.

Announcing a Dedicated Snyk API & Web Infrastructure Instance for Asia-Pacific

Snyk is delighted to announce a significant milestone for our customers and partners in the Asia-Pacific (APAC) region: the launch of a dedicated Snyk API & Web infrastructure instance, now available and hosted locally within the region. This investment addresses the critical needs of our growing customer base there, ensuring they can benefit from our modern, developer-first DAST capabilities while meeting local data residency and compliance requirements.

Why ANZ Technology Leaders Are Rethinking How AI, Speed, and Security Intersect

The pace of technological change is always fast, but with AI everywhere, things have gone into overdrive. In Australia and New Zealand, businesses plan to invest an average of roughly $15 million in generative AI, above the global average. This puts immense pressure on technology, security, and engineering leaders: they must innovate quickly while also facing complex risks from AI. This is forcing them to rethink how speed and security can work together.

Build Fast, Stay Secure: Guardrails for AI Coding Assistants

AI coding assistants like GitHub Copilot and Google Gemini Code Assist are changing how developers work — accelerating delivery, removing repetition, and giving teams back time to build. But speed isn’t free. Studies show that around 27% of AI-generated code contains vulnerabilities, not because the tools are broken, but because they generate code faster than most teams can review it. The result? A growing wave of insecure code is making it into production.
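One practical guardrail is to scan every pull request automatically, so AI-generated changes are reviewed at the same pace they are produced. A minimal sketch as a GitHub Actions workflow is shown below; the `snyk/actions/node` action and the `SNYK_TOKEN` secret name are assumptions here, so adapt them to your own CI setup:

```yaml
name: snyk-pr-scan
on: pull_request

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Static analysis (Snyk Code) of the changed codebase.
      # The job fails when issues of high severity or above are found,
      # blocking the pull request until they are addressed.
      - uses: snyk/actions/node@master
        with:
          command: code test
          args: --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

Running the scan as a required status check means insecure AI-generated code is caught before merge, without asking reviewers to manually keep up with assistant-level output volume.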