
You're Not Watching MCPs. Anthropic's Vulnerability Shows Why You Should Be.

Last week, researchers at OX Security published findings that should stop every security leader in their tracks. They discovered a critical vulnerability baked directly into Anthropic's Model Context Protocol SDK, affecting every supported language: Python, TypeScript, Java, and Rust. The result: remote code execution on any system running a vulnerable MCP implementation, with direct access to sensitive user data, internal databases, API keys, and chat histories. Over 7,000 publicly accessible servers were exposed.

Claude Mythos Changed Everything. Your APIs Are the First Target.

Anthropic just announced Claude Mythos Preview but did not make it publicly available. That decision alone should tell you everything you need to know about what this model can do. During internal testing, Mythos autonomously discovered and exploited zero-day vulnerabilities across every major operating system and web browser. It found a 27-year-old bug in OpenBSD. A 16-year-old vulnerability in a widely used media codec.

Everyone Is Securing the Wrong Layer of AI

The AI security market is crowded. Vendors are racing to protect prompts, harden models, detect jailbreaks, and scan for data leakage at the LLM layer. The investment is real. The intent is good. And most of it is missing the point. Here is the problem: agents do not just think. They act. They call APIs. They trigger workflows. They write to databases, send emails, move money, and modify production systems.

The AI Supply Chain is Actually an API Supply Chain: Lessons from the LiteLLM Breach

The recent supply chain attack involving Mercor and the LiteLLM vulnerability serves as a massive wake-up call for enterprise security teams. While the security industry has spent the last year fixating on prompt injections and model jailbreaks, this breach highlights a far more systemic vulnerability. The weakest link in enterprise AI is not necessarily the model itself. It is the middleware connecting the models to your data.

The Era of Agentic Security is Here: Key Findings from the 1H 2026 State of AI and API Security Report

The era of human-centric API consumption is officially ending. Over the past year, enterprises have rapidly transitioned from simply experimenting with Generative AI to deploying autonomous AI agents that drive core business operations. These agents act as digital employees. They utilize Large Language Models (LLMs) for reasoning, Model Context Protocol (MCP) servers for connectivity, and internal APIs for execution. This evolution has fundamentally altered the enterprise attack surface.
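The stack described above, where an LLM reasons, an MCP server connects, and internal APIs execute, can be illustrated with a minimal sketch. All names here (`internal_api_create_refund`, `llm_plan`, the tool registry) are hypothetical stand-ins, not any vendor's actual implementation; the point is only to show that the agent's "action" ultimately lands on an ordinary API call.

```python
# Minimal sketch of the three-layer agentic stack (all names hypothetical):
# a reasoning layer (LLM), a connectivity layer (MCP-style tool registry),
# and an execution layer (internal APIs).

def internal_api_create_refund(order_id: str, amount: float) -> dict:
    """Execution layer: a hypothetical internal API the agent can reach."""
    return {"order_id": order_id, "amount": amount, "status": "refunded"}

# Connectivity layer: an MCP-style registry mapping tool names to API calls.
TOOLS = {
    "create_refund": internal_api_create_refund,
}

def llm_plan(goal: str) -> tuple[str, dict]:
    """Reasoning layer: stand-in for an LLM choosing a tool and arguments."""
    # A real agent would derive these from the goal; hard-coded for the sketch.
    return "create_refund", {"order_id": "A-1001", "amount": 25.0}

def run_agent(goal: str) -> dict:
    """The agent loop: reason about the goal, select a tool, act via an API."""
    tool_name, args = llm_plan(goal)
    return TOOLS[tool_name](**args)  # the action happens at the API layer

print(run_agent("refund order A-1001"))
```

Note that nothing in the loop itself is model-specific: whatever the reasoning layer decides, the blast radius is determined by which APIs the tool registry exposes, which is exactly why the attack surface shifts from the model to the API layer.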

Everyone Is Deploying AI Agents. Almost Nobody Knows What They're Doing.

AI agents are already operating inside your enterprise: querying databases, triggering workflows, and taking action through APIs. Yet as adoption accelerates, organizations cannot see, track, or control what these agents are actually doing. In this session, Roey Eliyahu, Co-Founder and CEO of Salt Security, challenges the industry's narrow focus on LLM safety and exposes the much larger, invisible attack surface created by agentic systems.

How Does Sisense Stay on Top of API Attacks?

Sisense powers analytics experiences inside the applications businesses rely on every day. As an API-first platform, securing those connections is critical, especially as AI agents increasingly operate through APIs to access data and trigger workflows. In this conversation, Sangram, CISO and VP of IT at Sisense, and Michael Callahan, CMO at Salt Security, discuss how Sisense approached API security strategically to protect their platform, maintain customer trust, and support innovation in the Agentic AI era.

The Agentic Stack Explained: How LLMs, MCP Servers, and APIs Work Together

The term AI agent dominates current cybersecurity discourse. Vendors, analysts, and CISOs all use the label, yet beneath the surface-level familiarity there is often significant confusion about what an AI agent actually is, how it operates technically, and, most importantly for security teams, where the risk actually lives.

Everyone Is Deploying AI Agents. Almost Nobody Knows What They're Doing.

One constant I hear from the CISOs I speak with: AI agents are not coming. They are already inside organizations, reasoning through goals, selecting tools, and taking action through the same APIs that connect your most sensitive systems. And most security teams have no idea what those agents are doing.

An AI Agent Didn't Hack McKinsey. Its Exposed APIs Did.

This week’s McKinsey incident should be a wake-up call for every enterprise moving fast to deploy AI. Not because AI itself is inherently insecure. But because too many organizations are still thinking about AI security at the model layer, while the real enterprise risk sits in the action layer: the APIs, MCP servers, internal services, and shadow integrations that AI agents can reach, invoke, and manipulate. That is the part most companies still do not see.