
Fixing Fix Fatigue: Building Developer Trust for Secure AI Code

AI coding assistants are transforming the way developers work. With a prompt and a click, entire blocks of logic appear, boilerplate fades into the background, and velocity shoots up. But as anyone who’s integrated these tools into their daily routine can tell you, increased speed can come with increased risk. Vulnerabilities sneak in. Fixes pile up. And somewhere in the blur, developer trust begins to erode.

Eliminate the BOREDOM and Focus on the FUN: How to Use OpenAI Codex Cloud

In this video, I dive into OpenAI's Codex Cloud, showcasing how you can write, edit, and run code with the power of AI—directly in your browser. Whether you're a developer, student, or just curious about what AI can do for coding, this walkthrough gives you a hands-on look at how Codex Cloud makes programming faster, smarter, and easier.

Understanding CRA Compliance: Overcoming Challenges with an Integrated Security Testing Approach

Shipping software into the EU now comes with serious strings attached. The Cyber Resilience Act (CRA), in effect since December 2024, sets strict new rules for any company offering digital products or services in the region, whether you’re a local startup or a global platform. The regulation aims to improve cybersecurity across connected devices and cloud-based software.

Why AI Trust Will Shape Your Next Decade of Software Development

AI is often compared to electricity, but without trust, it’s just a live wire. As organizations adopt AI to move faster, reduce manual effort, and push the boundaries of what’s possible, one truth is becoming clear: trust in AI isn’t optional. It’s foundational. And for software development teams, AI Trust is now the north star that guides safe, scalable innovation.

Cursor's One-click Install MCP in Action

In this video, I’m checking out the brand new Cursor 1.0 release and testing one of its most exciting new features — the one-click MCP install. Setting up MCP servers has never been this easy! Join me as I walk through the process, share my first impressions, and see how smooth (or not) the setup really is. If you’ve been curious about Cursor or want to simplify your MCP workflows, this one’s for you.

Building AI Trust with Snyk Code and Snyk Agent Fix

Many businesses are using AI to innovate and boost productivity. But to truly benefit from AI, you need to trust it. That's where the Snyk AI Trust Platform comes in. As we announced at the 2025 Snyk Launch, the Snyk AI Trust Platform is designed to unleash innovation, reduce business risk, and accelerate software delivery in the age of AI.

Scan your AI-generated code from Cursor using Model Context Protocol (MCP)

We’re happy to announce that Cursor has validated Snyk’s CLI MCP server and added Snyk to their curated set of MCP tools from official providers. At Snyk, we recognized early on that although AI assistants accelerate development, they can inadvertently introduce vulnerable patterns, leverage outdated libraries, or generate code with known security flaws. To maintain the rapid iteration cycles that AI enables, developers need security to be as agile as AI itself.
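For readers who prefer to wire this up by hand rather than through Cursor's one-click install, a Cursor MCP configuration might look something like the sketch below. The exact command, package name, and flags are assumptions based on Snyk's published MCP setup — consult the official Snyk and Cursor documentation for the current values.

```json
{
  "mcpServers": {
    "Snyk": {
      "command": "npx",
      "args": ["-y", "snyk@latest", "mcp", "-t", "stdio"]
    }
  }
}
```

Placed in `.cursor/mcp.json` (per-project) or the equivalent global config, this tells Cursor to launch the Snyk CLI as an MCP server over stdio, so the agent can invoke Snyk scans on the code it generates.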

The New Threat Landscape: AI-Native Apps and Agentic Workflows

Businesses are moving beyond AI experiments and proofs of concept. As we approach what IDC is predicting will be the “AI pivot years” of 2025-2026, organizations are prioritizing, planning, and building for scale. This shift includes AI agents — self-directed tools that automate tasks — as technology providers strive to simplify development workflows. Under the surface, AI systems expose an expanded threat landscape that spans the software development lifecycle (SDLC).

Catch Bugs Faster: Cursor's BugBot for AI Code Review

In this video we dive into Cursor's 1.0 release, focusing on their new BugBot feature. This AI-powered tool integrates with your GitHub workflow to automatically review pull requests and identify potential bugs. We'll show you how to set up BugBot, trigger it on a pull request, and analyze the issues it finds, including a real-world example of it catching errors in AI-generated code from Google's Jules tool.

Announcing a Dedicated Snyk API & Web Infrastructure Instance for Asia-Pacific

Snyk is delighted to announce a significant milestone for our customers and partners in the Asia-Pacific (APAC) region: the launch of a dedicated Snyk API & Web infrastructure instance, which is now available and hosted locally within the region. This investment addresses the critical needs of our growing customer base in the region, ensuring that they can benefit from our modern, developer-first DAST capabilities while meeting local data residency and compliance requirements.

Why ANZ Technology Leaders Are Rethinking How AI, Speed, and Security Intersect

The pace of technological change is always fast, but with AI everywhere, things have gone into overdrive. In Australia and New Zealand, businesses plan to spend heavily on generative AI — roughly $15 million per organization on average, well above the global figure. This puts immense pressure on technology, security, and engineering leaders. They must innovate quickly, but they also face complex risks from AI. This is forcing them to rethink how speed and security can work together.

Build Fast, Stay Secure: Guardrails for AI Coding Assistants

AI coding assistants like GitHub Copilot and Google Gemini Code Assist are changing how developers work — accelerating delivery, removing repetition, and giving teams back time to build. But speed isn’t free. Studies show that around 27% of AI-generated code contains vulnerabilities, not because the tools are broken, but because they generate code faster than most teams can review it. The result? A growing wave of insecure code is making it into production.

Finding Software Flaws Early in the Development Process Provides Clear ROI

Organizations spend enormous effort fixing software vulnerabilities that make their way into their public-facing applications. The Consortium for Information and Software Quality estimated that the cost of poor software quality in the United States reached $2.41 trillion in 2022, a number sure to be much higher today. That’s nearly 10% of current U.S. GDP. As we will show, it makes sense that the cost of poor software quality is so high.

Transform Your AppSec Program With the Power of Snyk Analytics

As AI-generated code continues to boost developer productivity — and with it the number of vulnerabilities in code — a programmatic approach to security in a fully AI-enabled reality is essential. AI Trust and governance are the new standard for the AI era, achieved through visibility, prioritization, and policy. With this in mind, Snyk has steadily expanded the reports and analytics available in its platform to address this need.

Humans at the Center: Redefining the Role of Developers in an AI-Powered Future

In a previous blog, we discussed how AI is reshaping software development at every level. This shift means developers need new skills to stay effective. In fact, Gartner predicts that generative AI will require 80% of the engineering workforce to upskill through 2027. So what can today’s developers do to stay ahead? Here are a few steps to consider.

Snyk for Government Achieves FedRAMP Moderate Authorization: A Milestone for Secure Government Software

Today marks a significant milestone for Snyk and, more importantly, for the security posture of the U.S. government. I'm thrilled to introduce Snyk for Government, our FedRAMP Moderate authorized solution for the public sector. This authorization underscores our unwavering commitment to providing secure development solutions that meet the rigorous standards of the Federal Risk and Authorization Management Program (FedRAMP).

The Future of Developer Upskilling Is Human-Led, AI-Supported

In the last year, generative AI has dramatically accelerated how software is written. Developers can generate entire functions with a prompt, automate repetitive logic, and offload everything from boilerplate code to documentation. But with this newfound speed comes a deeper, more complex challenge: ensuring that what’s being created is secure, trustworthy, and production-ready.

Can Google Jules Build a SECURE Note Taking App?

In this video, I test out Google Jules, Google’s brand new AI developer assistant, to see if it can build a secure note-taking app from scratch. With a focus on privacy, authentication, and data protection, I challenge Jules to create something functional and secure. This is part of an ongoing series where I test different AI models and tools to see how well they handle real-world development tasks. Check out our playlist where we're putting these various models to the test!

AI Trust in Action: How Snyk Agent Redefines Secure Development

One word defines success or failure in the race to adopt AI in security workflows: trust. While the industry moves fast toward automation and autonomy, adoption often stalls when developers and the teams supporting them can’t trust what the AI delivers. It’s not enough for a tool to explain what it did. Developers want to know: Did it actually fix the problem? Will this change break something else? Can I rely on it again next time? Nowhere is that skepticism more justified than in security.