Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

The New Frontier: Why You Can't Secure AI Without Securing APIs

The release of a new KuppingerCole Leadership Compass is always a significant event for the cybersecurity industry, offering a vendor-neutral view of the market's current state. The 2025 edition, which focuses on API Security and Management, arrives at a pivotal moment. It makes plain a fact many organizations are only beginning to grasp: the rise of Artificial Intelligence is inseparable from the need for robust API security.

Beyond the Prompt: Securing the "Brain" of Your AI Agents

Imagine an autonomous AI agent tasked with a simple job: generating a weekly sales report. It does this reliably every Monday. But one week, it doesn't just create the report. It also queries the customer database, exports every single record, and sends the file to an unknown external server. Your firewalls saw nothing wrong. Your API gateway logged a series of seemingly valid calls. So, what happened? The agent wasn't hacked. Its mind was changed.
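One defensive pattern this scenario points to is least-privilege scoping per task: if an agent's job is the sales report, calls outside that scope should be refused even when each call is individually well-formed. Here is a minimal, hypothetical sketch of that idea (the task name, methods, and paths are illustrative, not any real product's policy engine):

```python
# Hypothetical per-task allowlist: an agent executing a given task may
# only invoke the API operations that task actually requires. Calls that
# a gateway would log as "valid" are still refused when out of scope.
TASK_ALLOWLISTS = {
    "weekly_sales_report": {
        ("GET", "/api/sales/summary"),
        ("POST", "/api/reports"),
    },
}

def authorize(task: str, method: str, path: str) -> bool:
    """Return True only if (method, path) is in the task's allowlist."""
    return (method, path) in TASK_ALLOWLISTS.get(task, set())
```

With such a policy, the report-generation calls succeed, while the unexpected customer-database export is denied regardless of whether the agent's credentials are technically valid.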

Beyond Anomalies: How Autonomous Threat Hunting Uncovers the Full Attack Story

APIs are essential in today's digital landscape, supporting everything from mobile apps to vital backend systems. As their importance grows, they also become attractive targets for advanced attackers who bypass traditional security methods. These adversaries do not simply exploit API flaws; instead, they mimic normal user behavior to launch subtle, slow-and-low attacks that are difficult for conventional tools to detect.
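To make the detection gap concrete, here is a toy sketch (thresholds and event shapes are invented for illustration, not Salt's detection logic): a conventional per-minute rate check never fires on an attacker who paces requests, while a longer-horizon behavioral view of the same traffic does.

```python
from collections import defaultdict

RATE_LIMIT_PER_MINUTE = 100   # hypothetical gateway-style threshold
UNIQUE_RESOURCE_CAP = 500     # hypothetical long-horizon baseline

def naive_rate_alerts(events):
    """events: iterable of (client, minute, resource).
    Flags clients exceeding a fixed per-minute rate -- the kind of
    check a conventional tool applies."""
    per_minute = defaultdict(int)
    for client, minute, _ in events:
        per_minute[(client, minute)] += 1
    return {client for (client, _), n in per_minute.items()
            if n > RATE_LIMIT_PER_MINUTE}

def long_horizon_alerts(events):
    """Flags clients that enumerate an unusually broad set of distinct
    resources across the whole window -- the footprint of slow,
    record-by-record scraping that stays under any per-minute limit."""
    touched = defaultdict(set)
    for client, _, resource in events:
        touched[client].add(resource)
    return {client for client, resources in touched.items()
            if len(resources) > UNIQUE_RESOURCE_CAP}

# A scraper pacing itself at 2 requests per minute for 24 hours:
slow_scraper = [("attacker", m, f"/records/{m * 2 + i}")
                for m in range(1440) for i in range(2)]
```

Running both detectors over `slow_scraper`, the rate check returns nothing, while the long-horizon check flags the client: the same 2,880 requests look benign minute by minute and malicious only in aggregate.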

Seeing Your API Attack Surface Through an Attacker's Eyes: Introducing Salt Surface

Your API attack surface is larger and more exposed than you realize. In today's complex, cloud-native environment, APIs are deployed at an astonishing rate. While this rapid pace fuels innovation, it also creates a significant visibility gap. The APIs you are aware of and manage are only the tip of the iceberg. Your actual risk exists beneath the surface, in the undocumented, unmanaged, and forgotten APIs that traditional security tools completely overlook.
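The visibility gap can be pictured as a simple set difference: the endpoints your documentation knows about versus the endpoints actually observed serving traffic. This sketch uses hypothetical endpoint names; real discovery works from traffic analysis rather than hand-built sets.

```python
# Endpoints declared in the managed inventory (e.g. OpenAPI specs).
documented = {
    "/api/v2/orders",
    "/api/v2/users",
    "/api/v2/reports",
}

# Endpoints actually seen answering requests in production traffic.
observed_in_traffic = {
    "/api/v2/orders",
    "/api/v2/users",
    "/api/v2/reports",
    "/api/v1/users",        # forgotten legacy version still live
    "/api/internal/debug",  # undocumented endpoint exposed externally
}

# The "below the iceberg" portion: live but unmanaged.
shadow_apis = observed_in_traffic - documented
```

Everything in `shadow_apis` carries risk precisely because no one is patching, monitoring, or access-controlling it.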

Securing the Next Era: Why Agentic AI Demands a New Approach to API Security

I’ve spent my career building solutions to protect the API fabric that powers modern businesses. I founded Salt because I saw that traditional security tools such as WAFs, gateways, and CDNs weren’t designed to see or secure APIs. That gap led to breaches, blind spots, and billions in risk. Today, we’re facing a new wave of risk that’s even bigger than the last. The rise of Agentic AI has brought us to a true inflection point. Agentic AI isn’t just another software layer.

LLMs Are Not Goldfish: Why AI Memory Poses a Risk to Your Sensitive Data

We’ve all heard the myth: goldfish have a memory span of just a few seconds. While biologists would dispute that, it’s useful as a metaphor in tech, especially when talking about memory, risk, and AI. The problem is, large language models (LLMs) are not goldfish. In fact, they have incredible memory. And increasingly, that memory isn’t just session-based. It’s persistent, long-term, and system-connected. That changes everything.
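One mitigation this risk suggests is scrubbing sensitive values before anything is written to a persistent memory store. Below is a minimal, hypothetical sketch using two regex patterns; a production system would need far broader coverage (and the pattern names here are my own labels, not a standard):

```python
import re

# Hypothetical guardrail: redact obvious sensitive patterns before a
# conversation turn is persisted to an LLM agent's long-term memory.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched sensitive value with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

memory_entry = redact("Contact jane.doe@example.com, SSN 123-45-6789.")
```

The point is where the scrubbing happens: at the boundary between the session and the persistent store, so what the model remembers long-term never contains the raw value.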

When AI Agents Go Rogue: What You're Missing in Your MCP Security

We’re at a major inflection point in how software operates. And I don’t say that lightly. For the past decade, we’ve seen a steady evolution toward microservices, APIs, and cloud-native architectures. But Agentic AI is something different. We’re no longer talking about static services. We’re now dealing with autonomous agents that reason, remember, and act in real time across live environments.

CISO Alert: Lessons from McDonald's Chatbot Breach

In June 2025, a disturbing security failure surfaced involving McDonald’s AI-powered hiring assistant, Olivia, operated by Paradox.ai. The platform, designed to screen job applicants via chatbot, exposed the personal information of over 64 million people. That included names, contact info, shift preferences, and even chat transcripts. The root cause? A combination of missteps that highlight the growing risk of insecure APIs in modern, AI-driven systems.