
Datadog MCP Server, Experiments, Bits AI Security Analyst, and more | This Month in Datadog

April’s This Month in Datadog spotlights the Datadog MCP Server, which gives AI agents secure, real-time access to Datadog telemetry, and Datadog Experiments, which lets you design, launch, and analyze experiments to see the full impact of product changes on the user journey. Plus, we cover how to:

- Accelerate Cloud SIEM investigations with Bits AI Security Analyst
- Remediate vulnerabilities in your codebase with Bits AI Dev Agent for Code Security
- Explore Datadog with natural language using Bits Assistant

How to investigate cloud credential compromise with Bits AI Security Analyst

Cloud environments create a flood of security signals, often reaching tens of thousands per day depending on the organization’s size. Security engineers and analysts spend a disproportionate share of their time triaging these signals instead of acting on legitimate threats. But the time-intensive parts of that work, such as identifying related signals and building a timeline, can be handled systematically, leaving teams free to focus on what actually requires human judgment.
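The "systematic" part of that triage work, grouping related signals by the entity they involve and ordering them into a timeline, can be sketched in a few lines of Python. The signal fields below are illustrative, not Datadog's signal schema, and this is a toy stand-in for what an investigation assistant automates:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical security signals; field names are illustrative only.
signals = [
    {"time": "2026-03-01T10:05:00", "entity": "arn:aws:iam::123:user/ci-bot",
     "rule": "Anomalous GetSecretValue"},
    {"time": "2026-03-01T10:02:00", "entity": "arn:aws:iam::123:user/ci-bot",
     "rule": "Login from new location"},
    {"time": "2026-03-01T10:09:00", "entity": "arn:aws:iam::123:user/dev",
     "rule": "Console login"},
]

def build_timelines(signals):
    """Group signals by the entity they involve, then order each group by time."""
    grouped = defaultdict(list)
    for s in signals:
        grouped[s["entity"]].append(s)
    return {
        entity: sorted(items, key=lambda s: datetime.fromisoformat(s["time"]))
        for entity, items in grouped.items()
    }

timelines = build_timelines(signals)
for entity, items in timelines.items():
    print(entity, "->", [s["rule"] for s in items])
```

Even this trivial grouping turns a flat list of signals into per-entity storylines, which is the shape an analyst (human or AI) actually reasons over.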

Evaluate, optimize, and secure your Google Cloud AI stack with Datadog

As AI adoption accelerates on Google Cloud, the challenge for most teams today is no longer just building AI-powered applications. It’s also managing the full AI stack from end to end, including data pipelines, infrastructure, release processes, and security operations. Many teams monitor these layers with different tools, which adds complexity, fragments visibility, and slows decisions about what to do next.

Spotting CI/CD misconfigurations before the bots do: Securing GitHub Actions with Datadog IaC Security

In March 2026, a GitHub account called hackerbot-claw, describing itself as an “autonomous security research agent powered by claude-opus-4-5,” began systematically targeting open source repositories—including one from Datadog. Over a week, it opened many pull requests designed to exploit misconfigurations in GitHub Actions workflows.
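One class of misconfiguration such bots probe for is the well-known "pwn request" pattern: a workflow triggered by `pull_request_target` (which runs with repository secrets) that checks out the untrusted pull request's head. The sketch below is a minimal, regex-based approximation of that single check, not Datadog IaC Security's actual rule engine (a real scanner would parse the YAML):

```python
import re

def flags_pwn_request(workflow_yaml: str) -> bool:
    """Flag workflows that combine the privileged pull_request_target trigger
    with a checkout of the untrusted PR head, letting attacker-controlled
    code run with access to repository secrets."""
    runs_privileged = re.search(r"\bpull_request_target\b", workflow_yaml)
    checks_out_pr_head = re.search(
        r"ref:\s*\$\{\{\s*github\.event\.pull_request\.head\.(sha|ref)",
        workflow_yaml,
    )
    return bool(runs_privileged and checks_out_pr_head)

risky = """
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: make test
"""

print(flags_pwn_request(risky))
```

The risky workflow above would run `make test` from the attacker's branch with the repository's secrets in scope, which is exactly the kind of exploit those pull requests were designed to reach.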

Detect runtime threats in Python Lambda functions with Datadog AAP

Python AWS Lambda functions are ephemeral and highly distributed, which creates security visibility gaps that traditional perimeter defenses and proxy-based controls struggle to fill. Techniques such as credential stuffing, SQL injection, and server-side request forgery (SSRF) can look like legitimate application traffic, making them difficult to identify without visibility inside the application itself.
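To make the SSRF case concrete, here is a toy Lambda-style handler that fetches a caller-supplied URL. Without validation, a request for the instance metadata endpoint looks like any other outbound call; a scheme-and-host allowlist is one common mitigation. The handler shape and allowlist are illustrative assumptions, not Datadog AAP's detection logic:

```python
from urllib.parse import urlparse

# Illustrative allowlist; in a real function this would come from configuration.
ALLOWED_HOSTS = {"api.example.com"}

def is_safe_url(url: str) -> bool:
    """Reject URLs usable for SSRF: only https to known hosts. A fetch of
    http://169.254.169.254/ (the instance metadata service) would otherwise
    blend in with legitimate application traffic."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

def handler(event, context=None):
    """Toy Lambda-style handler that fetches a caller-supplied URL."""
    url = event.get("url", "")
    if not is_safe_url(url):
        return {"statusCode": 400, "body": "URL not allowed"}
    # requests.get(url) would go here; omitted to keep the sketch self-contained.
    return {"statusCode": 200, "body": f"fetched {url}"}

print(handler({"url": "http://169.254.169.254/latest/meta-data/"}))
```

Runtime protection complements this kind of input validation by watching what the function actually does, which matters when the vulnerable code path was never anticipated.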

Observability and Security for the AI Era

Datadog has always been driven by a broader vision of helping teams understand and operate complex systems. In this session, you’ll hear from Yanbing Li, Chief Product Officer, and Shri Subramanian, Group Product Manager, as they share the latest updates across the Datadog product suite and discuss how that vision continues to shape the platform’s evolution and support the next generation of AI-driven applications.

Introducing our open source AI-native SAST

Static application security testing (SAST) tools help developers quickly catch potential vulnerabilities as they code. However, these tools rely on inflexible rules that often generate a high number of false positives, reducing trust in their accuracy and slowing adoption. To help developers access context-aware vulnerability detection, we’ve released an open source AI-native SAST solution. This tool scans code changes incrementally and surfaces security issues in real time.
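The incremental idea, scanning only the lines a change adds rather than the whole codebase, can be sketched with a toy diff scanner. The two rules below are deliberately simplistic placeholders; real SAST rules (and the AI layer that filters false positives) are far richer:

```python
import re

# Toy rule set; pattern names and regexes are illustrative only.
RULES = {
    "python-eval": re.compile(r"\beval\("),
    "hardcoded-aws-key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_changed_lines(diff: str):
    """Scan only lines the diff adds (those starting with '+'),
    which is what makes the scan incremental and fast."""
    findings = []
    for lineno, line in enumerate(diff.splitlines(), 1):
        if line.startswith("+") and not line.startswith("+++"):
            for rule, pattern in RULES.items():
                if pattern.search(line):
                    findings.append((rule, lineno, line[1:].strip()))
    return findings

diff = """\
+++ b/app.py
+result = eval(user_input)
 unchanged = True
+key = "AKIAABCDEFGHIJKLMNOP"
"""

findings = scan_changed_lines(diff)
print(findings)
```

Because unchanged lines are never re-scanned, feedback arrives at review time rather than in a nightly full-repository pass.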

CI/CD security: threat modeling using a MITRE-style threat matrix

Source code management (SCM) and CI/CD pipelines have become the industry standard for automating software delivery. But from the time a code change enters your SCM until it’s deployed, it’s susceptible to changes and reconfigurations that can go so far as to modify the pipeline itself. If you’re not proactively securing your CI/CD system, attackers can use it to grant themselves permissions, access secrets, and ship malicious code.
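A MITRE-style matrix is, at its simplest, a mapping from attack stages to the techniques available at each stage, which teams can then walk to ideate detections. The stages and entries below are illustrative examples, not the full matrix from the post:

```python
# Illustrative MITRE ATT&CK-style matrix for a CI/CD pipeline.
# Stage names and techniques are example entries, not an exhaustive taxonomy.
THREAT_MATRIX = {
    "initial-access": ["compromised developer credentials", "malicious pull request"],
    "execution": ["poisoned pipeline execution via workflow edit"],
    "privilege-escalation": ["self-granted repo admin via CI token"],
    "credential-access": ["exfiltrate secrets exposed to build jobs"],
    "impact": ["ship malicious code in a signed release artifact"],
}

def techniques_for(stage: str):
    """Look up candidate techniques to build detections for at a given stage."""
    return THREAT_MATRIX.get(stage, [])

print(techniques_for("credential-access"))
```

Enumerating techniques per stage is what turns "secure the pipeline" from a vague goal into a checklist of concrete detection workflows.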

CI/CD security: How to secure your GitHub ecosystem

In Part 1 of this series, we discussed the CI/CD security boundary, mapped out potential attack vectors with a CI/CD threat matrix, and introduced a simple threat model focused on ideating detection workflows. In this post, we’ll apply these principles to a real-world source code management (SCM) tool every developer is familiar with: GitHub. In addition to threat modeling, we’ll also take a closer look at historical attacks on the GitHub and GitHub Actions ecosystems.

Introducing the Datadog Code Security MCP

AI-assisted development helps teams write code faster, but that speed comes with added security risk. As agents generate more code, they can introduce vulnerabilities, insecure dependencies, or exposed secrets, often before a human reviewer ever sees the change. Security teams are left reviewing more code with the same resources, which makes it harder to catch issues early.