
Runtime Observability for LangChain and AutoGPT on Kubernetes

A platform team at a mid-size SaaS company runs three LangChain agents and one AutoGPT-derived planner on EKS. LangSmith is wired in. OpenTelemetry traces flow into their observability stack. Falco runs on every node. The setup is what most security teams would consider thorough. Then a pip dependency in one of the agents’ tool packages ships a malicious update.

AI Inference Server Observability in Kubernetes: The Four Signals MLOps Tools Don't Capture

In August 2025, a vulnerability chain was disclosed in NVIDIA Triton Inference Server that let an unauthenticated remote attacker send a single crafted inference request, leak the name of an internal shared memory region, register that region for subsequent requests, gain read-write primitives into the Triton Python backend’s private memory, and achieve full remote code execution. The exploit chain ran entirely through Triton’s standard inference API. No anomalous traffic volume.
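A chain like that leaves one observable seam: it has to register a shared-memory region the operator never provisioned. A minimal detection sketch, assuming access to parsed request logs (the log-entry shape and the `OPERATOR_REGIONS` allowlist are hypothetical; the endpoint path follows Triton's system shared-memory extension):

```python
# Hypothetical sketch: flag Triton shared-memory registrations for regions
# the operator never created. The log-entry format is an assumption.

OPERATOR_REGIONS = {"input_region_0", "output_region_0"}  # assumed allowlist

def suspicious_registrations(log_entries, allowlist=frozenset(OPERATOR_REGIONS)):
    """Return (region, client) pairs for register calls outside the allowlist.

    Each entry is assumed to look like:
      {"endpoint": "/v2/systemsharedmemory/region/<name>/register",
       "client": "10.0.3.7"}
    """
    flagged = []
    for entry in log_entries:
        parts = entry["endpoint"].strip("/").split("/")
        # Triton system shared-memory register endpoint:
        #   v2/systemsharedmemory/region/{name}/register
        if (len(parts) == 5
                and parts[1] == "systemsharedmemory"
                and parts[4] == "register"
                and parts[3] not in allowlist):
            flagged.append((parts[3], entry["client"]))
    return flagged
```

The point of the sketch: volume-based anomaly detection sees nothing here, but a simple allowlist on a low-frequency control-plane operation does.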

Runtime Observability for MCP Servers: A Security Guide

Your security team sees an MCP tool server throw an error. Your APM dashboard shows a latency spike. Your logs capture the JSON-RPC request with its method name and parameters. But none of that tells you whether the tool just read a harmless config file or dumped credentials to an external IP. Traditional observability tools, the APM platforms, the OpenTelemetry traces, the centralized logging pipelines, track performance across your Model Context Protocol deployments; none of them were designed to answer that question.
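One way to close that gap is to correlate each `tools/call` with the network activity the server produced in the seconds that followed, so a "read a file" tool that also dials an external IP stands out. A sketch under assumed event shapes (both log formats and the `allowed` destination set are hypothetical):

```python
# Hypothetical sketch: pair MCP tools/call entries with outbound connections
# observed shortly afterward. Both event shapes are assumptions.

def correlate_tool_calls(tool_calls, net_events, window=5.0,
                         allowed=frozenset({"api.internal.example"})):
    """Return (tool, destination) pairs where a tool call was followed,
    within `window` seconds, by a connection to a non-allowlisted host.

    tool_calls: [{"ts": 100.0, "tool": "read_file"}, ...]
    net_events: [{"ts": 101.2, "dest": "203.0.113.50"}, ...]
    """
    flagged = []
    for call in tool_calls:
        for ev in net_events:
            if (call["ts"] <= ev["ts"] <= call["ts"] + window
                    and ev["dest"] not in allowed):
                flagged.append((call["tool"], ev["dest"]))
    return flagged
```

In practice the network events would come from a kernel-level source such as eBPF or Falco rather than application logs, since that is exactly the layer the JSON-RPC record cannot see.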

Observability is security (We just pretended it wasn't)

For years, we’ve drawn this artificial line that equates observability with uptime, performance, and SRE dashboards, while security is about threats, alerts, SIEMs, and “bad things.” That separation was always convenient, but it was never real. The logs that tell you your service is slow are the same ones that tell you it’s compromised. We just routed them to different teams, different tools, and different budgets, then acted surprised when neither side had the full picture.

Logging Is Not Observability: The AI Security Gap MSSPs Can't Ignore

Every MSSP is fielding the same question from clients right now: “Are we safe with AI?” Most are answering with some version of “yes, we’re logging everything.” In a recent Defender Fridays episode, Saurabh Shintre, Founder and CEO of Realm Labs, drew a hard line between these two concepts: “You can log prompt and response, and this bare minimum you have to do.”

Agent-First Observability: Dynamic Data, High Cardinality, and the Business Impact

We want to transform how companies make decisions. That is not what you expect to hear from an observability company. Observability tools are supposed to help you monitor systems, debug incidents, and maybe reduce downtime. Useful, but not exactly the foundation for business decision making. So what does observability have to do with revenue, churn, or customer experience? More than you think, because observability already sits on top of the most important data in your business.

Observability and Security for the AI Era

Datadog has always been driven by a broader vision of helping teams understand and operate complex systems. In this session, you’ll hear from Yrieix Garnier, VP of Product, and Hugo Kaczmarek, Senior Director of Product, as they share the latest updates across the Datadog product suite and discuss how that vision continues to shape the platform’s evolution and support the next generation of AI-driven applications.

AI Usage Monitoring: Gaining Full Visibility Into GenAI Activity

Generative AI tools have entered the workplace through every possible channel. Employees use them to draft emails, summarize documents, and write code. This organic adoption creates a visibility gap for security and IT leaders, who must protect corporate data without blocking innovation. With these challenges in mind, this article explains how organizations can track GenAI use, and highlights practical steps for moving from identifying risks to enabling secure adoption: protecting data without stifling productivity.

Runtime Observability for AI Agents: See What Your AI Actually Does

Last Tuesday, a platform security engineer at a mid-size fintech company ran a routine audit on their production Kubernetes clusters. The audit surfaced three LangChain-based agents, two vLLM inference servers, and a Model Context Protocol (MCP) tool runtime. None had been reported by the development teams. None appeared in any security inventory. All had been running for weeks. One of the agents had been making outbound API calls to a third-party data enrichment service every four minutes.
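An audit like that can start from nothing more than the container images already visible to the API server. A minimal sketch, where the image-name patterns are assumptions about how these runtimes are commonly tagged:

```python
# Hypothetical sketch: surface AI runtimes from a pod inventory by matching
# image names. The pattern list is an assumption, not a complete catalog.

def find_ai_runtimes(pods, patterns=("vllm", "triton", "langchain", "mcp")):
    """Return pods whose image name matches a known AI-runtime pattern.

    pods: iterable of (namespace, pod_name, image) tuples.
    """
    hits = []
    for namespace, name, image in pods:
        if any(p in image.lower() for p in patterns):
            hits.append((namespace, name, image))
    return hits
```

In a real cluster the tuples would come from the Kubernetes API (for example, the Python client's `CoreV1Api().list_pod_for_all_namespaces()`), and the hits would then be diffed against the security inventory; anything in the cluster but not in the inventory is the shadow deployment the teaser describes.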

What Network Observability Reveals That Traditional Monitoring Misses

Modern enterprise networks have evolved into complex ecosystems that span multiple cloud environments, hybrid infrastructures, and countless interconnected devices. While traditional network monitoring has served organizations for decades, the increasing sophistication of cyber threats and the exponential growth in network traffic demand a more nuanced approach. Network observability emerges as the next evolution, providing unprecedented visibility into network behavior that traditional monitoring simply cannot match.