Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

6 Lessons Security Leaders Must Learn About AI and APIs

Most organizations treating AI security as a model problem are defending the wrong layer. Security teams filter prompts, patch jailbreaks, and tune model behavior (all necessary work) while the actual attack surface sits largely unexamined underneath. That surface is the API layer: the endpoints AI systems use to retrieve data, call tools, and take action on behalf of users. This isn't a theoretical gap.

Attacking the MCP Trust Boundary

Every secure API draws a line between code and data. HTTP separates headers from bodies. SQL has prepared statements. Even email distinguishes the envelope from the message. The Model Context Protocol (MCP), the fast-growing standard for connecting AI agents to external services, offers no such boundary: the models it sits on top of process instructions and data as a single stream, and MCP inherits that gap.
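The code/data separation mentioned above can be made concrete with SQL prepared statements. This toy sketch (not from the article) shows how a `?` placeholder keeps attacker-controlled input in the data channel, while string concatenation lets it cross into the code channel; LLM prompts have no equivalent of the placeholder.

```python
import sqlite3

# In-memory database with a single row for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Unsafe: input is spliced into the code channel, so it rewrites the query.
unsafe_rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()

# Safe: the ? placeholder confines the input to the data channel.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe_rows)  # the injected OR clause matches every row
print(safe_rows)    # no user is literally named "x' OR '1'='1"
```

An LLM prompt, by contrast, is one undifferentiated string: anything retrieved through an MCP tool lands in the same channel as the instructions.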

The Governance Gap: How the EU AI Act Makes API Security a Compliance Imperative

Your legal team just handed you a 400-page document and said "figure out compliance." The EU AI Act is live, and your organization likely falls under its scope, which is broader than many expect. Even non‑EU companies must comply if their AI systems are used, deployed, or produce effects within the European Union. In practice, that means global organizations building or integrating AI models cannot treat the Act as a regional regulation.

Why API Discovery Is the First Step to Securing AI

AI risk doesn’t live in the model. It lives in the APIs behind it. Every AI interaction triggers a chain of API calls across your environment. Many of those APIs aren’t documented or tracked. That’s your real exposure. Shadow API discovery gives you visibility into those hidden endpoints, so you can find them before attackers do. If you don’t know which APIs your AI relies on, you can’t secure the system.
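One common way to surface shadow APIs is to diff the endpoints actually observed in traffic against the documented inventory. The sketch below is purely illustrative (the log format, paths, and inventory are assumptions, not from the article), but it shows the core idea: anything seen on the wire that isn't in the spec is an unknown exposure.

```python
# Hypothetical documented inventory (e.g., paths from an OpenAPI spec).
documented = {"/v1/chat", "/v1/embeddings"}

# Illustrative common-log-style access log lines.
access_log = [
    '10.0.0.5 - "POST /v1/chat HTTP/1.1" 200',
    '10.0.0.7 - "GET /internal/v1/user-export HTTP/1.1" 200',
    '10.0.0.5 - "POST /v1/embeddings HTTP/1.1" 200',
    '10.0.0.9 - "POST /debug/replay HTTP/1.1" 200',
]

def observed_paths(lines):
    # Extract the request path from each quoted request line.
    return {line.split('"')[1].split()[1] for line in lines}

# Shadow APIs: observed in traffic but absent from the documented inventory.
shadow = observed_paths(access_log) - documented
print(sorted(shadow))
```

Real discovery tooling works from gateway logs, traffic mirrors, or eBPF capture rather than flat files, but the diff against the documented surface is the same.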

CISO Spotlight: Dimitris Georgiou on Building Security that Serves People First

Dimitris Georgiou has been a self-professed computer geek since the early 80s. At university, he studied the convergence of educational technology with computer science as part of his psychology MA – finding, to his disbelief, that systems were perilously insecure. Since then, he’s always worked in and around cybersecurity.

APIs Are Critical Infrastructure. Why Aren't We Treating Them That Way?

In this session, we take an in-depth look at what it truly means to treat APIs as critical infrastructure. Using industry data and real-world examples, we explore the gap between how much businesses rely on APIs and how well they are actually protected. And we talk about why that gap introduces operational and regulatory risks.

AppSec in the age of AI: An RSA Conference preview

Application security is at a breaking point as development teams move faster than ever, aided by AI-powered coding assistants. While these tools boost productivity, they also introduce subtle errors and insecure patterns at scale. The result: a growing backlog of vulnerabilities that outpaces traditional AppSec models. This webcast examines the risks and opportunities of AI in AppSec and previews who will be addressing them at RSA Conference. We’ll explore how defenders can use AI to level the playing field with automated scanning, intelligent prioritization, and secure-by-design practices.

The CISO's Dilemma: How To Scale AI Securely

Your board wants AI. Your developers are building with it. Your budget committee is asking for an ROI timeline. But as CISO, you're the one who has to answer when the inevitable question comes up: "How do we know this is secure?" If you're like most security leaders, you're caught between two impossible positions. Say yes to AI initiatives without proper security controls, and you're responsible when something goes wrong. Say no, and the business finds a way to build around you.

Agent-to-Agent Attacks Are Coming: What API Security Teaches Us About Securing AI Systems

AI systems are no longer just isolated models responding to human prompts. In modern production environments, they are increasingly chained together – delegating tasks, calling tools, and coordinating decisions with limited or no human oversight. Almost all that communication happens through APIs. This shift offers enormous productivity benefits. But it has also complicated security. Because as soon as systems can talk to each other, they can be attacked through each other.
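When systems can be attacked through each other, one defensive pattern borrowed from API security is to treat every inter-agent message as untrusted input and validate any requested action against an explicit allowlist before executing it. This is a minimal hypothetical sketch (the tool names, message shape, and helper are assumptions, not a real framework):

```python
# Hypothetical allowlist: tool name -> permitted argument names.
ALLOWED_TOOLS = {
    "search_docs": {"query"},
    "get_ticket": {"ticket_id"},
}

def vet_tool_call(message: dict) -> bool:
    """Accept a peer agent's tool request only if both the tool and all of
    its arguments appear on the allowlist; reject everything else."""
    tool = message.get("tool")
    args = message.get("args", {})
    if tool not in ALLOWED_TOOLS:
        return False
    return set(args) <= ALLOWED_TOOLS[tool]

# A benign request passes; a request smuggling an unlisted capability fails.
ok = vet_tool_call({"tool": "search_docs", "args": {"query": "refund policy"}})
bad = vet_tool_call({"tool": "delete_user", "args": {"id": "42"}})
print(ok, bad)
```

The design choice mirrors API gateway policy: a default-deny boundary between agents means a compromised upstream agent can only invoke the narrow set of actions you have explicitly granted it.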