
Voice of Security 2026: AI is everywhere yet manual work persists

AI adoption in security has soared. But for many teams, manual work and burnout remain stubbornly high. To understand why, and what security teams must do next, we partnered with Sapio Research to survey more than 1,800 security leaders and practitioners worldwide for our Voice of Security 2026 report. We wanted to learn how teams are using AI and automation, how the role of security is evolving, and how professionals believe AI will impact their careers. The data is revealing.

Stop Staring at JSON: How GenAI is Solving the API "Context Crisis"

There is a moment that happens in every SOC (Security Operations Center) every day. An alert fires. An analyst looks at a dashboard and sees a URL: POST /vs/payments/proc/77a. And then they stop. They stare. And they ask the question that kills productivity: "What does this thing actually do?" Is it a critical payment gateway? A test function? Does it handle credit card numbers or just transaction IDs?
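To make the "context crisis" concrete, the lookup an analyst wishes they had can be sketched as a tiny API catalog. This is a hypothetical illustration only; the metadata for the endpoint is invented for the example, not taken from any real system or from the article's product:

```python
# Hypothetical sketch: the business context an analyst is missing when
# an opaque endpoint shows up in an alert. All metadata is illustrative.

API_CATALOG = {
    "POST /vs/payments/proc/77a": {
        "purpose": "processes payment transactions",
        "data_sensitivity": "handles tokenized transaction IDs, not raw card numbers",
        "criticality": "high",
    },
}

def describe_endpoint(endpoint: str) -> str:
    """Answer the analyst's question: what does this thing actually do?"""
    meta = API_CATALOG.get(endpoint)
    if meta is None:
        return f"{endpoint}: unknown endpoint, escalate for manual review"
    return (f"{endpoint}: {meta['purpose']} "
            f"({meta['data_sensitivity']}; criticality: {meta['criticality']})")

print(describe_endpoint("POST /vs/payments/proc/77a"))
```

The point of the article is that such context rarely exists in a maintained catalog, which is where GenAI-driven enrichment comes in.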

MCP & AI Agent Security: Addressing the Growing Data Exfiltration Vector

The security landscape is shifting. For the past two years, security teams have focused primarily on what users type into chatbots by monitoring interactions with ChatGPT, Gemini, and Claude. But a new risk vector is emerging, one that operates largely outside traditional security controls: AI agents accessing corporate data autonomously through the Model Context Protocol (MCP).

From IDE to CLI: Securing Agentic Coding Assistants

Today we’re excited to announce that Zenity now protects the most powerful, enterprise-critical coding assistants - Cursor, Claude Code, and GitHub Copilot - from build-time to runtime. As AI becomes a first-class developer tool, Zenity gives security teams the visibility and control they need to safely embrace coding assistants everywhere they’re used: in IDEs, CLIs, or the cloud.

Semantic Guardrails for AI/ML - Protegrity AI Developer Edition

In this installment of our AI Developer Edition Set-up series, Dan Johnson, a software engineer at Protegrity, introduces semantic guardrails. Learn how to protect your LLM and chatbot workflows from malicious prompts and insecure AI responses. As AI becomes central to enterprise operations, controlling the context of conversations is a major challenge. Semantic guardrails provide a safety layer that keeps your AI on topic and helps prevent leaks of sensitive PII.
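As a rough illustration of the idea (not Protegrity's actual API), a semantic guardrail can be thought of as a filter on both sides of the LLM call: screen the prompt before it reaches the model, and scan the response before it reaches the user. The topic allowlist and PII patterns below are hypothetical:

```python
import re

# Hypothetical guardrail sketch: keep prompts on approved topics and
# redact PII-like strings from model output. Patterns and topics are
# illustrative only, not a production-grade PII detector.

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN format
    re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),     # credit-card-like digit runs
]

ALLOWED_TOPICS = {"billing", "orders", "shipping"}  # example allowlist

def check_prompt(prompt: str) -> bool:
    """Allow the prompt only if it mentions an approved topic."""
    words = set(prompt.lower().split())
    return bool(words & ALLOWED_TOPICS)

def redact_response(response: str) -> str:
    """Mask anything in the model output that matches a PII pattern."""
    for pattern in PII_PATTERNS:
        response = pattern.sub("[REDACTED]", response)
    return response

# Example usage
if check_prompt("Where is my shipping update?"):
    raw = "Your card 4111 1111 1111 1111 is on file."  # simulated LLM output
    print(redact_response(raw))
```

Real semantic guardrails go further than keyword and regex matching, using embeddings or classifiers to judge whether a conversation has drifted off topic, but the two-sided filter structure is the same.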

Agentic SecOps Workspace (ASW) office hours with LimaCharlie

Join us for a special Defender Fridays Office Hours session where the LimaCharlie team demonstrates the new Agentic SecOps Workspace (ASW) and explores what's possible when AI agents operate security infrastructure directly. At Defender Fridays, we delve into the dynamic world of information security, exploring its defensive side with seasoned professionals from across the industry. Our aim is simple yet ambitious: to foster a collaborative space where ideas flow freely, experiences are shared, and knowledge expands.

Sumo Logic's 2026 Security Operations Insights report: AI, siloed tools, and team alignment

Security threats have always been expanding and evolving, but recent data shows that modern applications are more complex for security and operations teams than ever before. And AI is only a piece of that puzzle. To stay on top of the changing market and hear directly from security leaders about what’s really top of mind, Sumo Logic surveyed over 500 security leaders with the help of UserEvidence. We asked about data pipelines, tool sprawl, confidence in SIEM, and, of course, AI.