AI Workload Security on GKE: Evaluating Google Cloud Native vs Third-Party Solutions

A CISO running AI agents on GKE has watched three Google product launches in eighteen months — Model Armor, expanded Security Command Center coverage for AI workloads, additions to Chronicle’s curated detection content — and is being asked whether the GCP-native stack is now sufficient. The vendor demos and the Google Cloud blog say yes. The 2 AM analyst experience says something different.

How Financial Services Teams Should Secure AI Agents in 2026

Your fraud detection agent scores 30,000 transactions per hour. Your KYC agent processes identity verifications against government watchlists. Your customer service chatbot resolves disputes and initiates balance transfers. Each agent runs on Kubernetes with inherited service account permissions that span payment APIs, customer databases, and compliance systems. Now imagine one of those agents is compromised through a prompt injection embedded in a customer support ticket.
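To make that blast radius concrete, here is a minimal sketch using the kubernetes Python client to enumerate every RBAC grant attached to one agent's ServiceAccount. The "fraud-agent" name and "payments" namespace are hypothetical stand-ins, not references to any real deployment.

```python
# Sketch: enumerate RBAC grants attached to an AI agent's ServiceAccount.
# Assumes kubeconfig (or in-cluster) access; names below are hypothetical.
from kubernetes import client, config

AGENT_SA = "fraud-agent"      # hypothetical ServiceAccount name
AGENT_NAMESPACE = "payments"  # hypothetical namespace

def grants_for_service_account(sa: str, namespace: str) -> list[str]:
    config.load_kube_config()  # or config.load_incluster_config()
    rbac = client.RbacAuthorizationV1Api()
    findings = []
    # Namespaced RoleBindings, across all namespaces.
    for rb in rbac.list_role_binding_for_all_namespaces().items:
        for subject in rb.subjects or []:
            if (subject.kind == "ServiceAccount"
                    and subject.name == sa
                    and subject.namespace == namespace):
                findings.append(f"{rb.metadata.namespace}/{rb.metadata.name} "
                                f"-> {rb.role_ref.kind}/{rb.role_ref.name}")
    # Cluster-wide grants are usually the ones that span payment APIs,
    # customer databases, and compliance systems all at once.
    for crb in rbac.list_cluster_role_binding().items:
        for subject in crb.subjects or []:
            if (subject.kind == "ServiceAccount"
                    and subject.name == sa
                    and subject.namespace == namespace):
                findings.append(f"ClusterRoleBinding/{crb.metadata.name} "
                                f"-> {crb.role_ref.name}")
    return findings

if __name__ == "__main__":
    for grant in grants_for_service_account(AGENT_SA, AGENT_NAMESPACE):
        print(grant)
```

If the list that prints is longer than the list of things the agent actually needs, the prompt-injection scenario above is not hypothetical; it is a matter of time.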

CVE-2026-0968: The libssh Heap Read That Isn't as Scary as Scanners Say

A missing null check in libssh’s SFTP directory listing code lets a malicious server crash clients, but real-world exploitability is extremely constrained. CVE-2026-0968 is an out-of-bounds heap read in sftp_parse_longname(), triggered when an SFTP client processes a crafted SSH_FXP_NAME response with a malformed longname field. Red Hat, which serves as the CNA (CVE Numbering Authority) for this vulnerability, scored it 3.1 (Low), while Amazon Linux independently scored it 4.2 (Medium).
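For readers who want to see the attack surface, here is a sketch of the SFTP v3 wire framing (per draft-ietf-secsh-filexfer-02) that a malicious server controls. The longname bytes below are illustrative only, not a reproduction of the actual CVE-2026-0968 trigger.

```python
# Sketch: how an SFTP server controls the longname bytes that a libssh
# client later hands to sftp_parse_longname(). SSH_FXP_NAME framing is
# standard SFTP v3; the longname contents here are illustrative.
import struct

SSH_FXP_NAME = 104

def sftp_string(b: bytes) -> bytes:
    # SFTP strings are a uint32 length followed by raw bytes.
    return struct.pack(">I", len(b)) + b

def crafted_fxp_name(request_id: int, longname: bytes) -> bytes:
    payload = struct.pack(">BI", SSH_FXP_NAME, request_id)
    payload += struct.pack(">I", 1)      # count: one directory entry
    payload += sftp_string(b"file.txt")  # filename
    payload += sftp_string(longname)     # fully server-controlled longname
    payload += struct.pack(">I", 0)      # ATTRS with no flags set
    return struct.pack(">I", len(payload)) + payload

# An ls(1)-style longname truncated short of the fields a client-side
# parser expects; the transport accepts it, the parser is what chokes.
packet = crafted_fxp_name(7, b"drwxr-xr-x")
print(packet.hex())
```

Every byte of that longname field is attacker-controlled, which is why the bug is real. The reason it is not scary is the precondition: a victim client has to connect to a hostile or already-compromised SFTP server before any of this matters.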

AI Workload Baseline and Drift Detection: Defining "Normal" Agent Behavior

Security teams deploying AI agents into Kubernetes know they need behavioral baselines. The concept is straightforward: define what “normal” looks like for each agent, then detect when behavior drifts in ways that suggest compromise. The problem is that AI agents are designed to change. A model update alters inference latency. A prompt revision shifts tool-calling sequences. A new MCP integration adds API destinations nobody flagged during the last security review.
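As a concrete starting point, a baseline can be as simple as the set of (tool, destination) pairs observed during a trusted window. The sketch below assumes that event shape; it is not tied to any particular telemetry source.

```python
# Sketch: a naive behavioral baseline for one agent, expressed as the set
# of (tool, destination) pairs seen during a trusted window. The event
# shape is an assumption for illustration, not any specific tool's schema.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen so instances are hashable for set membership
class ToolCall:
    tool: str          # e.g. "sql_query", "http_request"
    destination: str   # e.g. the host or database the call reached

def build_baseline(trusted_calls: list[ToolCall]) -> set[ToolCall]:
    return set(trusted_calls)

def drift(baseline: set[ToolCall], observed: list[ToolCall]) -> list[ToolCall]:
    """Return calls never seen during the baseline window."""
    return [c for c in observed if c not in baseline]

baseline = build_baseline([
    ToolCall("sql_query", "orders-db"),
    ToolCall("http_request", "api.internal.example"),
])
novel = drift(baseline, [
    ToolCall("sql_query", "orders-db"),
    ToolCall("http_request", "paste.example.net"),  # never baselined
])
print(novel)  # [ToolCall(tool='http_request', destination='paste.example.net')]
```

The teaser's problem shows up immediately: a legitimate model update, prompt revision, or new MCP integration lands in `novel` just like an attacker does, which is why baselines have to be re-cut per release rather than learned once.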

How to Triage an AI Agent Execution Graph: A Three-Tier Decision Framework for Security Teams

A platform security engineer gets an alert at 2:14 a.m. One of the LangChain agents running in their production Kubernetes cluster has produced an execution graph with eleven nodes, seven tool calls, and an egress edge to a domain that is not in the agent’s approved integration list. The chain is fully rendered in their console. Every signal is there.
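It helps to see how little code the first triage cut requires. The sketch below is a hypothetical tiering rule keyed only on the unapproved-egress signal from the scenario above; the graph shape and allowlist are invented, and this is not the full framework the article develops.

```python
# Sketch: a first-pass triage rule over an agent execution graph.
# Graph shape, allowlist, and tier labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Edge:
    src: str
    dst: str
    kind: str  # "tool_call" or "egress"

APPROVED_EGRESS = {"api.internal.example", "vectors.internal.example"}

def triage(edges: list[Edge]) -> str:
    unapproved = [e for e in edges
                  if e.kind == "egress" and e.dst not in APPROVED_EGRESS]
    if unapproved:
        return f"tier-1: page on-call, unapproved egress to {unapproved[0].dst}"
    if any(e.kind == "tool_call" for e in edges):
        return "tier-2: queue for business-hours review"
    return "tier-3: log only"

graph = [Edge("agent", "orders-db", "tool_call"),
         Edge("agent", "exfil.example.net", "egress")]
print(triage(graph))  # tier-1: page on-call, unapproved egress to exfil.example.net
```

The hard part is everything this rule ignores: the other ten nodes, the seven tool calls, and whether any of them were the setup for that one edge.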

The CISO's AI Agent Production Approval Checklist: 7 Gates to Clear Before Go-Live

Your engineering lead is in your office Thursday morning. They want to push an AI agent to production next Tuesday. It’s a LangChain-based workflow agent, connected through MCP to three internal tools and one external API, with access to a customer database. The framework posters are on the wall. Your team has spent two quarters standing up runtime observability. And sitting in that chair, you still don’t know whether to say yes.

A CISO's Guide to Deploying AI Agents in Production Safely

Your CNAPP shows green across every posture check—hardened clusters, compliant configurations, no critical CVEs—but when your board asks "Are our AI agents safe in production?", you cannot answer with confidence because your tools see the infrastructure, not what the agents actually do at runtime.

Detecting Rogue AI Agents: Tool Misuse and API Abuse at Runtime

When your CNAPP flags a suspicious dependency in an AI agent container, your WAF logs an unusual API spike, and your SIEM shows a burst of cloud storage calls—are those three separate incidents or one rogue agent attack? Most security teams treat them as three tickets in three queues, investigated by three people who may never connect the dots. By the time someone pieces together that a single compromised agent drove all three signals, the attacker has already moved laterally and exfiltrated data.
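The correlation step itself is not the hard part. Here is a sketch, assuming invented alert shapes and a 15-minute window, of pivoting three tools' alerts onto one workload identity; real correlation would key on a verified identity such as a SPIFFE ID or pod UID rather than a free-text label.

```python
# Sketch: collapsing alerts from separate tools into one incident by
# pivoting on a shared workload identity. Field names and the window
# size are assumptions for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [
    {"source": "cnapp", "workload": "agent-7", "time": datetime(2026, 3, 1, 2, 1)},
    {"source": "waf",   "workload": "agent-7", "time": datetime(2026, 3, 1, 2, 9)},
    {"source": "siem",  "workload": "agent-7", "time": datetime(2026, 3, 1, 2, 12)},
]

def correlate(alerts, window=timedelta(minutes=15)):
    by_workload = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        by_workload[a["workload"]].append(a)
    incidents = []
    for workload, items in by_workload.items():
        # Naive check: first and last alert for one workload fall in the window.
        if len(items) > 1 and items[-1]["time"] - items[0]["time"] <= window:
            incidents.append((workload, [a["source"] for a in items]))
    return incidents

print(correlate(alerts))  # [('agent-7', ['cnapp', 'waf', 'siem'])]
```

Twenty lines of grouping is not the obstacle. The obstacle is that the three tools rarely share a workload identifier to pivot on in the first place.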

What is an AI-BOM? Why Static Manifests Fall Short

Your AI-BOM shows every model, tool, and data source you deployed. But when your SOC investigates an alert about unusual agent behavior, that inventory tells them nothing about what actually happened at runtime. Static AI-BOMs document what you intended to run. Attackers exploit what your AI workloads actually do in production: which APIs they call, what data they touch, and how they use approved tools in unapproved ways.
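The gap is easiest to see as a diff. The sketch below invents a declared manifest and a set of runtime observations; no specific AI-BOM format (CycloneDX, SPDX, or otherwise) is assumed.

```python
# Sketch: diffing a static AI-BOM against runtime observations.
# Both structures are invented for illustration.
declared = {
    "models": {"fraud-scorer-v3"},
    "tools": {"sql_query", "http_request"},
    "endpoints": {"api.internal.example"},
}

observed = {
    "models": {"fraud-scorer-v3"},
    "tools": {"sql_query", "http_request", "shell_exec"},        # undeclared tool
    "endpoints": {"api.internal.example", "paste.example.net"},  # undeclared egress
}

for category in declared:
    undeclared = observed[category] - declared[category]
    if undeclared:
        print(f"{category}: running but not in AI-BOM -> {sorted(undeclared)}")
```

Everything interesting lives in that diff, and a manifest alone can never produce it; only runtime observation supplies the right-hand side.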

eBPF for AI Agent Enforcement: What Kernel-Level Security Catches (and What It Misses)

Your team deployed Tetragon six months ago. TracingPolicies are humming along—you’re catching unauthorized binary executions, blocking suspicious network connections, and generating seccomp profiles from observed behavior. Runtime security for your traditional workloads is solid. Then engineering ships their first autonomous AI agent into production. A LangChain agent connected to internal databases, external APIs through MCP tool runtimes, and a vector database for RAG.
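To ground that, here is a sketch that post-processes a Tetragon JSON event stream (for example, piped from `tetra getevents -o json`) to flag agent pods connecting outside an allowlist. The field paths and pod-name prefix are simplified assumptions; check Tetragon's actual event schema before reusing any of them.

```python
# Sketch: filtering Tetragon kprobe events for agent pods that open
# connections to unapproved addresses. Field paths are simplified
# assumptions about Tetragon's JSON output, not a documented contract.
import json
import sys

AGENT_POD_PREFIX = "langchain-agent-"             # hypothetical pod naming
APPROVED = {"api.internal.example", "10.0.12.7"}  # hypothetical allowlist

for line in sys.stdin:
    event = json.loads(line)
    kprobe = event.get("process_kprobe")          # assumed top-level key
    if not kprobe:
        continue
    pod = kprobe.get("process", {}).get("pod", {}).get("name", "")
    if not pod.startswith(AGENT_POD_PREFIX):
        continue
    for arg in kprobe.get("args", []):
        addr = arg.get("sock_arg", {}).get("daddr")  # assumed arg shape
        if addr and addr not in APPROVED:
            print(f"{pod}: connection to unapproved address {addr}")
```

The title's caveat shows up immediately: this catches the socket, but an agent misusing an approved tool against an approved endpoint never trips a kernel-level rule at all.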