
What is an AI-BOM? Why Static Manifests Fall Short

Your AI-BOM shows every model, tool, and data source you deployed. But when your SOC investigates an alert about unusual agent behavior, that inventory tells them nothing about what actually happened at runtime. Static AI-BOMs document what you intended to run. Attackers exploit what your AI workloads actually do in production: which APIs they call, what data they touch, and how they use approved tools in unapproved ways.

RSA and DC Dispatches: Agentic AI Security Is the Story, Government Policy Needs to Catch Up

Fresh off two weeks of back-to-back meetings in Washington, DC, and on the floor and in the wings of the RSA Conference, one theme echoed through nearly every conversation I had with senior government officials and public policy leaders from global technology companies: agentic AI security is the defining emerging security challenge of this moment — and policy is not keeping pace.

The UK's Cyber Action Plan marks the end of compliance-led security

The UK government's new £210 million Cyber Action Plan signals an important shift in how cyber risk is being addressed at a national level. Designed to strengthen cyber defences across government departments and the wider public sector, the plan establishes a new Cyber Unit and introduces stronger expectations around resilience, accountability and operational capability.

When AI Stops Assisting and Starts Acting

For decades, the service desk has operated on a simple assumption: humans must interpret every IT problem before action can be taken. A ticket is created. Teams investigate. Data is pulled from multiple tools. Eventually someone determines the root cause and decides what to do next. It works, but it's slow, reactive, and heavily manual. That assumption is starting to change. With Tanium AI agents in ServiceNow Now Assist for ITSM connected to Tanium's real-time endpoint intelligence, machines can now understand issues, analyze live telemetry, and recommend or execute remediation in seconds.

eBPF for AI Agent Enforcement: What Kernel-Level Security Catches (and What It Misses)

Your team deployed Tetragon six months ago. TracingPolicies are humming along—you’re catching unauthorized binary executions, blocking suspicious network connections, and generating seccomp profiles from observed behavior. Runtime security for your traditional workloads is solid. Then engineering ships their first autonomous AI agent into production. A LangChain agent connected to internal databases, external APIs through MCP tool runtimes, and a vector database for RAG.
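The teaser above mentions TracingPolicies that catch unauthorized binary executions. For readers unfamiliar with what such a policy looks like, here is a minimal sketch assuming the standard Tetragon CRD schema; the blocked path `/usr/bin/nc` and the policy name are illustrative placeholders, not anything from the article:

```yaml
# Hypothetical Tetragon TracingPolicy: kill any process that
# exec()s /usr/bin/nc. Path and name are placeholder examples.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-nc-exec
spec:
  kprobes:
  - call: "sys_execve"          # hook the execve syscall
    syscall: true
    args:
    - index: 0                  # first argument: the binary path
      type: "string"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/usr/bin/nc"
      matchActions:
      - action: Sigkill         # terminate the offending process
```

A policy like this is applied with `kubectl apply -f`, which illustrates the gap the article goes on to explore: kernel-level hooks see binaries and connections, but an AI agent misusing an approved API call never trips a rule like this one.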

Securing AI Agents on GKE: Where gVisor, Workload Identity, and VPC Service Controls Stop Working

You enable GKE Sandbox on a dedicated node pool, bind Workload Identity Federation to your AI agent pods, wrap your data services in a VPC Service Controls perimeter, and deploy your agents with the Agent Sandbox CRD using warm pools for sub-second startup. Your security posture dashboard shows every control configured and active. And then an attacker uses prompt injection to trick an agent into exfiltrating sensitive data through API calls that every single one of those layers explicitly allows.

4 Phases, 357 Crashes, 2 Bugs: What an AFL++ Campaign Actually Looks Like

357 crash files. 2 real bug sites. That’s the outcome of this AFL++ campaign after roughly 8.5 billion executions across multiple harnesses, binaries, and phases. At first glance, everything looked like success. Crashes were increasing steadily. New inputs were being generated every few seconds. Coverage appeared to improve over time. From a surface-level perspective, the campaign looked productive. Then triage began.

Gemini XSS Vulnerability: When AI Executes Malicious Code

Artificial intelligence no longer just generates text; it generates and executes code in real time. With tools like Google Gemini, features such as code canvases and live previews are turning AI systems into interactive execution environments. This shift introduces a new and rapidly growing category of risk: AI security vulnerabilities tied to real-time code execution.

AI Takes Over RSAC Conference (Now What?) with Dave Bittner

In this RSAC 2026 Conference recap, Dave Bittner, Host of the CyberWire Daily podcast, joins Data Security Decoded host Caleb Tolin from the guest seat to discuss the biggest theme dominating the conference: artificial intelligence, and, more specifically, agentic AI. From wall-to-wall AI messaging across San Francisco to in-depth conversations with security leaders and analysts, one thing became clear: the industry has moved past debating whether AI will take hold. It already has. Now, the focus has shifted to making it safe.