Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Zenity Joins CoSAI: Why Agentic AI Standards Need Practitioners at the Table

The agentic AI security standards your enterprise will adopt in the next 18 months are being written right now, inside working groups most CISOs have never heard of. The Coalition for Secure AI (CoSAI), an OASIS Open Project with more than 45 sponsor organizations, including Google, Microsoft, NVIDIA, IBM, and Meta, is producing the frameworks, reference architectures, and secure design patterns that will define how autonomous agents operate inside enterprise environments.

Composable AI Agents and the SOC That Runs Itself

Picture a SOC that investigates its own alerts, hunts threats across customer tenants, isolates compromised endpoints, and writes its own detection rules. Envision the same SOC attacking itself every morning to find the gaps it missed, all before your analysts arrive for the day. This is not a roadmap item, but an operational reality on LimaCharlie. It’s what agentic AI security looks like on a platform built to support it.

Announcing Justification Coach: AI-Powered Guidance for Better Access Requests and Stronger Audits

Today, we’re introducing Justification Coach, a new AI-powered capability that helps users write better access request justifications in real time, so admins get the context they need for audits and investigations without having to chase people down after the fact.

New KnowBe4 Agent Risk Manager Addresses Pervasive AI Agent Risk

By Roger A. Grimes and Matthew Duren AI agents can deliver incredible productivity gains, but their operational complexity makes effective threat modeling harder than ever, including for developers, administrators and especially end users. At the same time, both developers and non-developers are increasingly vibe-coding, or using AI to generate functional software from natural language prompts.

Claude Mythos Changed Everything. Your APIs Are the First Target.

Anthropic just released Claude Mythos Preview. They did not make it publicly available. That decision alone should tell you everything you need to know about what this model can do. During internal testing, Mythos autonomously discovered and exploited zero-day vulnerabilities across every major operating system and web browser. It found a 27-year-old bug in OpenBSD. A 16-year-old vulnerability in a widely used media codec.

Why Securing AI Code Generation is Critical for AppSec

The revolution is here, but it’s not what we expected. AI coding assistants have transformed software development, with developers shipping code faster than ever before. GitHub Copilot, Amazon CodeWhisperer, and Claude Code have become as essential to modern development as Git itself. The productivity gains are undeniable; what once took hours now takes minutes. But there’s a dangerous blind spot in this revolution: security.

Observability and Security for the AI Era

Datadog has always been driven by a broader vision of helping teams understand and operate complex systems. In this session, you’ll hear from Yanbing Li, Chief Product Officer, and Shri Subramanian, Group Product Manager, as they share the latest updates across the Datadog product suite and discuss how that vision continues to shape the platform’s evolution and support the next generation of AI-driven applications.

I Tried 5 Prompt Injection Attacks (Here's What Happened)

In this video, we explore the growing security risk of prompt injection in large language model (LLM) applications. As AI becomes embedded in more products, new vulnerabilities emerge, especially through natural language manipulation. We break down how LLMs work, the importance of system prompts, and demonstrate five real-world prompt injection techniques used to extract sensitive information or bypass safeguards. You’ll see live examples using different models and learn why newer models are more resilient, but still not immune.