
RSA and DC Dispatches: Agentic AI Security Is the Story, Government Policy Needs to Catch Up

Fresh off two weeks of back-to-back meetings in Washington, DC, and on the show floor and in the wings of the RSA Conference, one theme echoed through nearly every conversation I had with senior government officials and public policy leaders from global technology companies: agentic AI security is the defining emerging security challenge of this moment — and policy is not keeping pace.

My First RSA: Agents, Challenges, and Community

I am no stranger to conferences, and certainly no stranger to security conferences. Over the years, Black Hat and DEF CON have both become staples of my calendar. But this year added a new one to the list: RSA, and it truly lived up to the hype. The show floor was full of bright lights, fancy booths, and, yes, tattoos, if you knew where to find them.

Securing the AI That Runs the Enterprise: Zenity + ServiceNow SecOps

As agents take on more responsibility, they also introduce a new class of security challenges, ones that traditional tools weren’t built to handle. This is why Zenity and ServiceNow have partnered to bring end-to-end agent security directly into ServiceNow SecOps, where security teams already operate.

OpenClaw Needs Real Security Controls; We Built Them Open Source

AI agent adoption and development are evolving quickly. The tooling used to build agents is improving fast, but the security controls around those agents are often rigid, opaque, or difficult to adapt to real environments. As more teams experiment with OpenClaw, one challenge becomes clear: developers need ways to inspect what agents are doing, evaluate risky behavior, and intervene when necessary.
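The three needs named above (inspect what agents do, evaluate risky behavior, intervene) can be sketched as a simple pre-execution hook. This is an illustrative sketch only; `AgentAction`, `RISKY_TOOLS`, and `inspect` are hypothetical names for this post, not OpenClaw APIs.

```python
# Hypothetical sketch: inspect each proposed agent action before it runs,
# evaluate it against simple risk rules, and intervene by blocking or
# flagging it. None of these names come from OpenClaw itself.
from dataclasses import dataclass

# Tools assumed (for illustration) to be too dangerous to run unattended.
RISKY_TOOLS = {"shell_exec", "file_delete", "send_email"}

@dataclass
class AgentAction:
    tool: str
    arguments: dict

def inspect(action: AgentAction) -> str:
    """Return 'allow', 'review', or 'block' for a proposed action."""
    if action.tool in RISKY_TOOLS:
        return "block"  # intervene: never let this tool run automatically
    # Evaluate: crude example rule flagging credential-like arguments.
    if any("password" in str(v).lower() for v in action.arguments.values()):
        return "review"  # hold for a human before execution
    return "allow"
```

The point of a hook like this is that it sits outside the model: the agent can reason however it likes, but the decision to execute stays in inspectable code.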

The Shift to Continuous Context and the Rise of Guardian Agents

AI agent risk doesn’t emerge in a single moment. It develops over time across configuration changes, runtime behavior, long-horizon tasks, and interactions between agents, users, and enterprise systems. An agent’s behavior and exposure can shift in real time as it rewrites instructions, updates memory, and dynamically alters execution.

Securing Homegrown Agents in Runtime: The Value of Zenity + Microsoft Foundry

Over the past year, Microsoft Foundry has emerged as a cornerstone for enterprises building and deploying homegrown agents at scale. Organizations across industries are using Foundry to move beyond experimentation and into production, creating AI agents that can reason, invoke tools, access enterprise data, and automate complex workflows. How the integration works: Zenity integrates with the Foundry control plane to inspect agent behavior and enforce security policies inline at runtime.
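The shape of inline runtime enforcement can be sketched as a policy layer wrapped around an agent's tool calls. This is a minimal conceptual sketch, not the Zenity or Foundry API; the policy name, `invoke` signature, and domain are all assumptions for illustration.

```python
# Illustrative only: every tool call passes through named policies before
# executing. A policy returns True if the call is allowed. The example
# policy and corp.example domain are invented for this sketch.
POLICIES = {
    "deny_external_email": lambda tool, args: not (
        tool == "send_email"
        and not str(args.get("to", "")).endswith("@corp.example")
    ),
}

def invoke(tool: str, args: dict) -> str:
    """Run a tool call only if every registered policy allows it."""
    for name, allowed in POLICIES.items():
        if not allowed(tool, args):
            return f"blocked by {name}"  # enforced inline, at call time
    return f"executed {tool}"
```

Because enforcement happens at the moment of invocation rather than at design time, the policy applies no matter how the agent arrived at the decision to call the tool.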

Why Soft Guardrails Get Us Hacked: The Case for Hard Boundaries in Agentic AI

One recurring theme in my research and writing on agentic AI security has been the distinction between soft guardrails and hard boundaries. As someone who serves on the Distinguished Review Board for the OWASP Agentic Top 10, and who spends every day thinking about how to secure agents across enterprise environments at Zenity, this distinction is not academic. It is potentially the single most important conceptual framework practitioners need to internalize right now.
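The soft-guardrail/hard-boundary distinction can be made concrete in a few lines. A soft guardrail is advisory text the model can ignore or be tricked out of; a hard boundary is a check the code enforces regardless of what the model decides. The function and constant names below are hypothetical, chosen only to illustrate the contrast.

```python
import os

# Soft guardrail: just words in a prompt. Nothing enforces it.
SOFT_GUARDRAIL = "You must never read files outside /workspace."

# Hard boundary: a code-level check the agent cannot talk its way past.
ALLOWED_ROOT = "/workspace"

def read_file_hard_boundary(path: str) -> str:
    """Read a file only if it resolves inside ALLOWED_ROOT."""
    resolved = os.path.realpath(path)  # resolve symlinks and ../ tricks
    if not resolved.startswith(ALLOWED_ROOT + os.sep):
        raise PermissionError(f"{path} is outside {ALLOWED_ROOT}")
    with open(resolved) as f:
        return f.read()
```

A prompt-injected agent can be persuaded to disregard `SOFT_GUARDRAIL`; it cannot be persuaded past the `PermissionError`.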

AI Agent Governance: The CISO Checklist for the New AI Agent Reality

AI agents are rapidly becoming embedded in enterprise workflows, influencing revenue operations, customer engagement, development, and internal decision-making. As these systems gain autonomy and inherit access across SaaS, cloud, and endpoint environments, they introduce a new layer of operational and security risk that traditional controls cannot fully manage.

PerplexedBrowser: Accepting a Meeting or Handing Your Local Files to an Attacker?

How a routine calendar invite enabled silent local file access and data exfiltration.

Note: This post is part of a coordinated disclosure by Zenity Labs detailing the PleaseFix vulnerability family affecting the Perplexity Comet Agentic Browser. This blog focuses on browser-level autonomous agent execution and session compromise.

What a Rogue Vacuum Army Teaches Us About Securing AI

If you’re like me, you’ve been enthralled with the recent story, expertly written by Sean Hollister at The Verge, about how Sammy Azdoufal built a remote control for his DJI Romo vacuum with a PlayStation controller, and ended up in control of 7,000+ robovacs all over the world. On the surface, it sounds like vibe coding gone slightly sideways. I mean, really, what could a vacuum possibly do? Turns out… a lot.