
First Look, Then Leap: Why Observability is the First Step in Securing your AI Agents

AI Agents aren’t coming - they’re already here, reshaping industries, enhancing productivity, and unlocking new possibilities. Embedded in tools like Microsoft 365 Copilot, Salesforce Einstein, and custom-built assistants, they’re making decisions, automating workflows, and interacting with sensitive business data in real time. This wave of innovation is moving fast, but for once, security doesn’t have to play catch-up.

AI Agents Take DC: Inside Washington's Developing Agentic Security Agenda

AI Agents have become one of the most discussed emerging technologies in enterprise environments, and now, they’ve captured the attention of policymakers in Washington, DC. Over the past several weeks, a series of developments have brought AI Agents into the national spotlight, particularly through the lens of cybersecurity and regulatory preparedness.

Securing the Model Context Protocol (MCP): A Deep Dive into Emerging AI Risks

In 2025, the rise of autonomous agents and developer-integrated copilots has introduced an exciting new interface paradigm: the Model Context Protocol (MCP). Originally proposed by Anthropic, MCP has quickly become the de facto open standard for allowing language models to securely interact with external tools, APIs, databases, and services. But as enterprise adoption surges, so do the risks - both novel and unanticipated.

2025 Gartner SRM Summit: From Gatekeeper to Enabler - How Security Leaders Can Embrace AI Agents with Confidence

The 2025 Gartner Security & Risk Management Summit was a wake-up call, and an opportunity, for anyone responsible for securing the future of AI. With over 1,700 AI use cases now reported across federal agencies and enterprise adoption growing at a breakneck pace, the message was clear: AI is no longer on the horizon. It’s here, it’s active, and it needs securing.

The Real AI Agent Risk Isn't Data Loss. It's Unauthorized Action.

Your AI Agent just updated a vendor’s payment details in your Enterprise Resource Planning (ERP) system based on a seemingly harmless prompt. No data was exfiltrated. No access policy was violated. But now, a $250,000 payment is sitting in a fraudulent bank account. This is the new face of AI risk. As enterprises adopt AI Agents - either off the shelf or custom built - security teams are facing a fast-moving shift.

Validating the Mission: Zenity Labs Research Cited in Gartner's AI Platform Analysis

Research is what turns cybersecurity from a reactive scramble into a proactive discipline. It’s how security teams uncover new threats, pressure-test defenses, and understand the unintended consequences of innovation (especially as AI Agents reshape the attack surface). At Zenity, research isn’t a side effort. It’s how we build, challenge, and ultimately secure what’s next.

Securing the future of AI Agents: Reflections from the Microsoft Build Stage

Standing on stage at Microsoft Build, surrounded by innovators shaping the future in the era of AI Agents, I felt equal parts inspired and responsible. Inspired by the rapid momentum around AI, and responsible for raising a flag about something we don’t talk about enough: how we secure the very systems that are now acting on our behalf. This post isn’t a recap but a continuation, a chance to go deeper into the story I shared (and the one we’re still writing).

Zenity and Microsoft Copilot Studio Extend AI Agent Security from Buildtime to Runtime

As enterprises race to adopt AI Agents to drive productivity and innovation, we are excited to announce that Zenity and Microsoft Copilot Studio are joining forces to enable full adoption of AI Agents. Together, Zenity and Microsoft Copilot Studio help organizations confidently build, deploy, and use AI Agents with built-in security and governance throughout the development and deployment process, so they can accelerate adoption at scale.