
0Click Attacks: When TTPs Resurface Across Platforms

If there’s one lesson security teams should take from recent disclosures, it’s this: AI agent attack techniques don’t disappear; they resurface across vendors and platforms with only small variations. What researchers called out months ago is showing up again, now in Salesforce as the ForcedLeak vulnerability.

Zenity and Slalom Partner to Accelerate Secure AI Agent Adoption

Zenity, the leader in securing AI agents everywhere, today announced an official partnership with Slalom, a global business and technology consulting firm. This collaboration is designed to help enterprises safely and confidently adopt AI agents by combining Zenity’s robust security and governance platform with Slalom’s deep expertise in digital transformation and AI implementation.

Bridging AI Safety and AI Security: Reflections from the NYC AI Safety Meetup

The regularly occurring NYC AI Safety Meetups cover a variety of topics, with this latest session focusing on the convergence of AI Safety and AI Security. I had the fantastic opportunity to contribute to the conversation; it’s one that’s been budding for some time, but this was my first direct exposure.

Security for Autonomous Agents and Reducing Shadow AI

In the rapidly evolving field of AI, understanding the distinctions between how agentic workflows are initiated is crucial. While the verbiage among tech providers varies, it essentially comes down to whether an agent is prompted by a human from a chat interface or triggered autonomously by external sources like emails, data changes, or calendar invites.
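The distinction above can be sketched in a few lines. This is a minimal, hypothetical illustration — the names (`Trigger`, `run_agent`, the trigger sources) are assumptions for the sketch, not any vendor’s SDK:

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    source: str   # e.g. "chat", "email", "data_change", "calendar"
    payload: str

def is_human_initiated(trigger: Trigger) -> bool:
    # Human-initiated: a person typed a prompt into a chat interface.
    # Anything else is an autonomous activation from an external event.
    return trigger.source == "chat"

def run_agent(trigger: Trigger) -> str:
    if is_human_initiated(trigger):
        # A human is in the loop and can review what the agent does next.
        return f"chat session: responding to '{trigger.payload}'"
    # No human in the loop: the event content is untrusted input,
    # so stricter guardrails apply before the agent acts on it.
    return f"autonomous run ({trigger.source}): vetting '{trigger.payload}' before acting"
```

The security-relevant point is the branch itself: autonomously triggered runs consume external, potentially attacker-controlled content with no person watching, so they warrant tighter controls than human-prompted sessions.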

Zenity Named a 2025 Cool Vendor in Gartner's Agentic AI TRiSM Report

Your security teams are facing an unprecedented challenge. AI agents are spreading across enterprises faster than anyone anticipated, from Microsoft 365 Copilot processing sensitive emails to custom agents built on AWS Bedrock accessing critical databases. Over 80% of Fortune 500 companies are already deploying these autonomous systems, oftentimes without adequate security guardrails. The result is a rapidly expanding attack surface that conventional security tools simply cannot see or secure.

Preventing AI Agents from Going Rogue: Zenity Collaborates with Microsoft Copilot Studio to Deliver Inline Protection Against Malicious Behavior

AI agents are autonomous, powerful, and deeply embedded in how modern businesses operate. From rerouting customer support emails to accessing critical business tools like email and CRM systems, agents are transforming workflows across departments. As of Microsoft’s Q1 2025 earnings report, over 230,000 organizations, including 90% of the Fortune 500, are using Microsoft Copilot Studio to build custom agents for a huge variety of tasks.

Why Detection? Why Now? Key Takeaways from the NIST NCCoE Public COI Working Session

In April, I had the amazing opportunity to participate in a unique AI security event put on by the National Cybersecurity Center of Excellence (NCCoE). The event was all about getting the community together to discuss what a Cyber AI Profile should look like as an overlay to the NIST Cybersecurity Framework (CSF) 2.0.

Securing the AI Agent Era: One Control Panel Across SaaS, Endpoint, and Cloud

The companies winning with AI aren’t just deploying agents faster; they’re operationalizing them responsibly. They realize AI agents are creating a new, dynamic attack surface that traditional tools were never designed to handle. These agents span the entire enterprise ecosystem. Microsoft 365 Copilot, Copilot Studio, and Salesforce Agentforce are SaaS‑managed agents. GitHub Copilot, Cursor, and Claude desktop run directly on user devices as device‑based agents.

How Zenity Helps Enterprises Apply AI TRiSM to AI Agents

The future isn’t human vs. machine; it’s humans trying to govern machines. As AI agents grow more autonomous (replying to emails, writing code, granting access, making decisions), the real threat isn’t a malicious model. It’s the absence of controls. AI agents don’t come with built-in security policies. They don’t ask for permission. They simply do what they’re told (sometimes correctly, sometimes dangerously) because no guardrails told them otherwise.

When "Secure by Design" Isn't Enough: A Blind Spot in Power Platform Security Access Controls

Security Groups play a pivotal role in tenant governance across platforms like Entra, Power Platform, and SharePoint. They allow administrators to control access, enforce identity-aware security, and prevent unauthorized interactions. However, we’ve identified a security group bypass risk: Application Users (App Users), service principal identities from Entra, can slip past Security Group restrictions, creating misaligned identity assumptions and enabling unauthorized data access.
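The shape of the flaw described above can be sketched as an access check that only gates one identity type. This is a hypothetical illustration of the logic gap — the function and names are invented for the sketch and are not actual Power Platform or Entra APIs:

```python
# Hypothetical environment security group membership (illustrative only).
SECURITY_GROUP_MEMBERS = {"alice@contoso.com", "bob@contoso.com"}

def can_access_environment(identity: str, identity_type: str) -> bool:
    """Sketch of a misaligned access check.

    identity_type is "user" for interactive users or
    "service_principal" for App Users backed by an Entra service principal.
    """
    if identity_type == "user":
        # Interactive users are evaluated against the security group.
        return identity in SECURITY_GROUP_MEMBERS
    # The gap: service-principal App Users never hit the group check,
    # so an app identity reaches data the group was meant to gate.
    return True
```

The takeaway is that a control enforced per identity *type* rather than per identity leaves any unchecked type as a bypass path — which is why App User access needs its own explicit governance.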