
Governing Agentic AI: A Practical Framework for the Enterprise

In my previous piece, "The Agentic AI Governance Blind Spot," I laid out what I believe is one of the most critical gaps in the AI governance landscape today: the three most cited frameworks in AI governance, NIST AI RMF, ISO 42001, and the EU AI Act, don’t contain a single mention of agentic AI. Not one reference to autonomous agents, multi-agent systems, or AI that takes actions with real-world consequences. The response to that piece confirmed what I suspected.

OpenClaw Security Checklist for CISOs: Securing the New Agent Attack Surface

OpenClaw exposes a fundamental misalignment between how traditional enterprise security is designed and how AI agents actually operate. As an AI agent assistant, OpenClaw operates with human permissions, executes actions autonomously, and processes untrusted content as input, all while sitting outside the visibility of conventional security tools.

The Agentic AI Governance Blind Spot: Why the Leading Frameworks Are Already Outdated

Ask any security, technology, or business leader and they will stress the importance of governance. It’s a concept echoed in board conversations, among business and technology executives, and of course within our own cybersecurity echo chamber. The U.S. Cybersecurity and Infrastructure Security Agency (CISA), for example, has a page dedicated to cybersecurity governance, which it defines as follows.

From IDE to CLI: Securing Agentic Coding Assistants

Today we’re excited to announce that Zenity now protects the most powerful, enterprise-critical coding assistants - Cursor, Claude Code, and GitHub Copilot - from build time to runtime. As AI becomes a first-class developer tool, Zenity gives security teams the visibility and control they need to safely embrace coding assistants everywhere they’re used: in IDEs, in CLIs, or in the cloud.

Seeing What AI Touches: Introducing Data Lens

Security teams are entering a new phase of risk driven by the combination of AI agents and broad access to internal and external data. Agents are no longer limited to responding to prompts. They read files, pull documents from shared repositories, query external sources, and move information across systems on behalf of users. This shift brings real business value. Knowledge becomes easier to access, workflows move faster, and information that once required deliberate effort can be surfaced instantly.

Securing AI Where It Acts: Why Agents Now Define AI Risk

In the first round of the AI gold rush, most conversations about AI security centered on models: large language models, training data, hallucinations, and prompt safety. That focus made sense when AI was largely confined to generating text, images, or recommendations. But that era is already giving way to something far more consequential.

GreyNoise Findings: What This Means for AI Security

Late last week, GreyNoise published one of the clearest signals we have seen that AI systems are no longer just research targets. They are operational targets. Their honeypot infrastructure captured 91,403 attack sessions between October 2025 and January 2026, revealing two distinct campaigns systematically mapping AI deployments at scale. This is a meaningful inflection point.

Advancing AI Security: Zenity's Contributions to MITRE ATLAS' First 2026 Update

MITRE ATLAS has become a critical resource for cybersecurity leaders navigating the rapidly evolving world of AI-enabled systems. Traditional threat models are built for human-initiated workflows, APIs, and infrastructure, so they are no longer sufficient to describe modern AI attacks.

Advancing MITRE ATLAS AI Security Through Zenity's Contributions

MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally recognized AI security framework that catalogs adversarial techniques targeting artificial intelligence systems. Similar in structure to MITRE ATT&CK but purpose-built for AI, machine learning, and agentic systems, ATLAS translates abstract AI risks into concrete, actionable attack techniques that security teams can monitor and mitigate.