Navigating Enterprise AI Implementation: Risks, Rewards, and Where to Start

At Snyk, we believe that AI innovation starts with trust, which must be earned through clear governance, sound security practices, and proven value delivery. As we scale our AI initiatives across the business, we’re continually refining how to implement AI in a way that is not just fast and functional, but also secure and responsible.

Cursor IDE Malware Extension Compromise in $500k Crypto Heist

Cursor IDE, as many are aware, is a fork of Microsoft’s popular open source VS Code project. Like VS Code, Cursor supports IDE extensions, which prompts many developers to migrate over with their favorite extensions and long-standing workflows, shortcuts, themes, and other configurations. Back in May 2021, Snyk’s Security Labs conducted research that uncovered VS Code extensions vulnerable to insecure code patterns.

From Hype to Trust: Building the Foundations of Secure AI Development

Generative AI and Agentic AI are changing everything from who writes software to how we define secure architecture. At Snyk’s recent Lighthouse event in NYC, leaders from cloud, security, and development teams came together to answer one essential question: how do we move fast with AI without breaking trust? The answer? Start with visibility, bake in security by design, and never lose sight of the humans behind the code.

Minimizing False Positives: Enhancing Security Efficiency

Organizations waste enormous amounts of time chasing down security alerts that turn out to be nothing. Recent research from May 2025 shows that 70% of a security team's time is spent investigating alerts that are false positives, time that could instead go toward proactive measures that improve the organization's security posture.

Fixing Fix Fatigue: Building Developer Trust for Secure AI Code

AI coding assistants are transforming the way developers work. With a prompt and a click, entire blocks of logic appear, boilerplate fades into the background, and velocity shoots up. But as anyone who’s integrated these tools into their daily routine can tell you, increased speed can come with increased risk. Vulnerabilities sneak in. Fixes pile up. And somewhere in the blur, developer trust begins to erode.

Understanding CRA Compliance: Overcoming Challenges with an Integrated Security Testing Approach

Shipping software into the EU now comes with serious strings attached. The Cyber Resilience Act (CRA), in force since December 2024, sets strict new rules for any company offering digital products or services in the region, whether you’re a local startup or a global platform. The regulation aims to improve cybersecurity across connected devices and cloud-based software.

Why AI Trust Will Shape Your Next Decade of Software Development

AI is often compared to electricity, but without trust, it’s just a live wire. As organizations adopt AI to move faster, reduce manual effort, and push the boundaries of what’s possible, one truth is becoming clear: trust in AI isn’t optional. It’s foundational. And for software development teams, AI Trust is now the north star that guides safe, scalable innovation.

Building AI Trust with Snyk Code and Snyk Agent Fix

Many businesses are using AI to innovate and boost productivity. But to truly benefit from AI, you need to trust it. That's where the Snyk AI Trust Platform comes in. As we announced at the 2025 Snyk Launch, the Snyk AI Trust Platform is designed to unleash innovation, reduce business risk, and accelerate software delivery in the age of AI.