Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

What an 'Aha' Moment with an Org Admin Token Taught One DevSecCon Speaker About AI Security

As the summer winds down and conversation around AI Security heats up, the Snyk team is in full swing planning mode for a double-header this October—with the return of DevSecCon’s Flagship conference, focusing this year on Securing the Shift to AI Native, and serving as the founding partner of the inaugural AI Security Summit.

Cato CTRL Threat Research: Threat Actors Abuse Simplified AI to Steal Microsoft 365 Credentials

AI marketing platforms have exploded in popularity, becoming everyday tools for creative teams in enterprises worldwide. Platforms like Simplified AI offer marketers the ability to generate content, clips, and campaigns at scale. For CISOs and IT leaders, approving such services often seems straightforward: allow access, whitelist the domain, and enable the marketing team to innovate.

Securing LLM Superpowers: When Tools Turn Hostile in MCP

In Part 1 of this blog series, we explored the architecture, capabilities, and risks of the Model Context Protocol (MCP). In this post, we will focus on two attack vectors in the MCP ecosystem: prompt injection via tool definitions and cross-server tool shadowing. Both exploit how LLMs trust and internalize tool metadata and responses, allowing attackers to embed hidden instructions or persistently influence future tool calls without direct user prompts.

The CSA AI Controls Matrix: A Framework for Trustworthy AI

The Cloud Security Alliance, a respected non-profit founded in 2008 to pursue cloud security assurance, has now unveiled its Artificial Intelligence Controls Matrix (AICM), a quiet revolution for trustworthy AI. It has come at a time when generative AI and large language models are moving quickly into every sector. These systems can transform business, but they can also fail, or be made to fail. Because of this, trust becomes the measure of success.

Examples of AI Privacy Issues in the Real World

What’s the fastest way to lose trust? Expose private data. With AI moving from pilots to core workflows in support, finance, HR, and healthcare, one careless prompt or leaky integration can turn into headlines, fines, and weeks of incident response. The most useful way to understand the risks is to study AI privacy issues examples from the real world.

4 ways to scale compliance with AI

You got compliant—congrats! That’s a big milestone. It tells customers, investors, and the world that you take security seriously. But compliance doesn’t stop at your first audit. As your company grows, so do the requirements. You’ll have to manage new frameworks, more policies, faster timelines, more scrutiny, and more complexity. ‍ Modern GRC teams need to do more with less.

How Synthesia Became One of Europe's Fastest-growing AI Companies | Frameworks for Growth

In this episode of Frameworks for Growth, Vanta CEO Christina Cacioppo sits down with Steffen Tjerrild, co-founder and COO/CFO of Synthesia, to talk about what it takes to scale one of the UK’s fastest-growing AI companies. They explore the future of AI-generated video, how Synthesia built category-defining technology, and why European values may shape the next chapter of AI development. Topics covered.

The Case of the Phantom Date: How a Single Pixel Fooled Our Visual AI

We’ve all seen it: a cutting-edge, multimodal LLM, capable of understanding complex documents, stumbles on a seemingly simple task. In our case, the model confidently reported a contract’s signing date as "March 30". The only problem? The document clearly stated "March 9th". It wasn't just a minor error; it was a baffling one that sent us down a rabbit hole of debugging.