
System Prompts Are Not Security Controls: A Deleted Production Database Proves It

On April 25th, a Cursor AI coding agent running Anthropic's Claude Opus 4.6, one of the most capable models in the industry, deleted the production database for PocketOS, a software platform used by car rental businesses across the country to manage their entire operations. The deletion took 9 seconds.

The Vendor to Beat, Built Before the Category Had a Name

A few years ago, we made a call that most of our industry was not ready to hear. AI agents were going to become the primary way enterprises get work done. Not as a concept, not as a research project, but as the operational reality of how modern enterprises run. And the security infrastructure being built around them was designed for something fundamentally different. Prompt filtering. Model safety. Input guardrails.

AI Agents Are Already Running the Enterprise. Security Hasn't Caught Up.

For years, conversations about AI security risks were framed as forward-looking. Organizations were told to prepare for a future where autonomous agents would act on their behalf, access sensitive systems, and make consequential decisions without human intervention at every step. That future, it turns out, is now.

Agents Need Boundaries. The Market Is Starting to Agree.

Gartner published the inaugural Hype Cycle for Agentic AI last week (and yes, we're included in two subcategories: Agentic AI Security and Guardian Agent). A few things worth noting. It's inaugural: Gartner publishes over 130 Hype Cycles a year, and standing up a new one signals that a space has earned its own map. And it dropped in April, months ahead of the June-to-August window when these things usually appear.

Zenity Joins CoSAI: Why Agentic AI Standards Need Practitioners at the Table

The agentic AI security standards your enterprise will adopt in the next 18 months are being written right now, inside working groups most CISOs have never heard of. The Coalition for Secure AI (CoSAI), an OASIS Open Project with more than 45 sponsor organizations, including Google, Microsoft, NVIDIA, IBM, and Meta, is producing the frameworks, reference architectures, and secure design patterns that will define how autonomous agents operate inside enterprise environments.

The Floor Was Selling AI. The Hallways Were Asking for Help.

One man's perspective on RSA 2026 and what the AI agent security market actually looks like up close. Every year at RSA, there's a theme: not the official one printed on the lanyards, but the real one. The one that shows up in every booth conversation, every hallway argument, every dinner where people finally say what they wouldn't say on a panel. A few years back, it was cloud. Then zero trust took over and held the room for a while. XDR came through and confused everyone. Identity had its moment.

Context Engineering Is Security Engineering. RSA 2026 Made the Case.

Cisco polled its major enterprise customers before RSA 2026 and found something astounding: 85% of large enterprises are experimenting with AI agents, but only 5% have moved them into production. That's not a technology gap. The models work. The tools exist. The 80-point spread between experimentation and production is a governance gap. It's also a context gap.

RSA and DC Dispatches: Agentic AI Security Is the Story, Government Policy Needs to Catch Up

Fresh off two weeks of back-to-back meetings in Washington, DC, and on the floor and in the wings of the RSA Conference, one theme echoed through nearly every conversation I had with senior government officials and public policy leaders from global technology companies: agentic AI security is the defining emerging security challenge of this moment, and policy is not keeping pace.