GitProtect Report: DevOps Incidents Rise by 21%, While Impact Hours Double to 9,255

With 607 recorded incidents, DevOps platforms experienced a 21% year-over-year increase, while total disruption time nearly doubled to 9,255 hours in 2025. This marks a clear rise in both the frequency and severity of outages compared to the previous year, according to the latest GitProtect Report.

What Real AI Security Incidents Reveal About Today's Risks

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

AI-SPM for Financial Services: Managing AI Risk Under SOC2, PCI-DSS, and MAS TRM

The external auditor’s evidence request lands Tuesday morning. A security architect at a Tier 1 bank pulls up her AI-SPM dashboard for the SOC2 Type 2 review. Eighty-three AI agents running across the bank’s clusters. For each one, the dashboard shows the current configuration and the current behavioral baseline. The data is accurate, comprehensive, and point-in-time.

Prompt and Tool Call Visibility: What Your AI Agents Are Actually Doing

It is 11:47 p.m. and the on-call security engineer is staring at two dashboards. On the left, LangSmith — the ML team’s debugging stack — showing the agent’s prompts, model responses, tool calls, and tokens consumed. On the right, the runtime detection console showing eBPF-captured syscalls, network connections, and process trees from the same Pod. Both are populated.

Longhorn on Production Clusters: Storage Configuration, Tuning, and Gotchas

Longhorn is a lightweight, distributed block storage system built specifically for Kubernetes. It runs entirely inside your cluster, turning local disks on worker nodes into replicated persistent volumes with no external storage array required. That simplicity is what makes it appealing, especially in the Rancher and SUSE ecosystem where it ships as the default storage option. You get persistent storage that is easy to install, easy to understand, and tightly integrated with the Kubernetes lifecycle.
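
The replicated-volume model described above is driven by a StorageClass. A minimal sketch, using Longhorn's standard CSI provisioner (the replica count and stale-replica timeout shown are illustrative values, not recommendations for any particular cluster):

```yaml
# Longhorn StorageClass: each PersistentVolume provisioned from it
# is backed by block replicas on the local disks of worker nodes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: driver.longhorn.io   # Longhorn's CSI driver
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  numberOfReplicas: "3"           # copies kept on distinct nodes
  staleReplicaTimeout: "2880"     # minutes before a failed replica is discarded
```

Any PersistentVolumeClaim that names this class gets a volume replicated across nodes with no external storage array involved.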

Todd's Tenth Rule of certificate automation

I’m an old engineer at heart. Many of my ideals were formed by Joel’s Things You Should Never Do, Fred’s No Silver Bullet, and Brian’s Big Ball of Mud. One of my favorites is Greenspun’s Tenth Rule: any sufficiently complicated program contains an ad hoc, bug-ridden implementation of half of Common Lisp. The joke isn’t really about programming languages. It’s about a pattern: certain problems have a shape, and no matter how you approach them, you end up building the same solution, in the same order, until you arrive at the same messy place.

Runtime Observability for LangChain and AutoGPT on Kubernetes

A platform team at a mid-size SaaS company runs three LangChain agents and one AutoGPT-derived planner on EKS. LangSmith is wired in. OpenTelemetry traces flow into their observability stack. Falco runs on every node. The setup is what most security teams would consider thorough. A pip dependency in one of the agents’ tool packages ships a malicious update.

AI Inference Server Observability in Kubernetes: The Four Signals MLOps Tools Don't Capture

In August 2025, researchers disclosed a vulnerability chain in NVIDIA Triton Inference Server that allowed an unauthenticated remote attacker to send a single crafted inference request, leak the name of an internal shared memory region, register that region for subsequent requests, gain read-write primitives into the Triton Python backend’s private memory, and achieve full remote code execution. The exploit chain ran entirely through Triton’s standard inference API. No anomalous traffic volume.

Runtime Observability for MCP Servers: A Security Guide

Your security team sees an MCP tool server throw an error. Your APM dashboard shows a latency spike. Your logs capture the JSON-RPC request with its method name and parameters. But none of that tells you whether the tool just read a harmless config file or dumped credentials to an external IP. Traditional observability tools—the APM platforms, the OpenTelemetry traces, the centralized logging pipelines—track performance across your Model Context Protocol deployments.