How to monitor MCP server activity for security risks

The Model Context Protocol (MCP) is a popular framework for connecting AI agents to data sources, such as APIs and databases. Because this technology is still new and evolving, its security standards are also in the early stages. This means that MCP servers are susceptible to misuse, so teams building and running them internally need visibility into server interactions to keep their environments safe from attacks.
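As a minimal illustration of that kind of visibility, the sketch below wraps a tool handler so every invocation is logged with its arguments, outcome, and duration. It is a generic Python decorator, not the MCP SDK's API; the `query_database` handler is hypothetical.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mcp_audit")

def audited_tool(func):
    """Log each tool invocation: name, arguments, status, duration."""
    @functools.wraps(func)
    def wrapper(**kwargs):
        start = time.monotonic()
        status = "ok"
        try:
            return func(**kwargs)
        except Exception:
            status = "error"
            raise
        finally:
            # Emit a structured record that a SIEM or log pipeline can ingest.
            logger.info(json.dumps({
                "tool": func.__name__,
                "args": kwargs,
                "status": status,
                "duration_ms": round((time.monotonic() - start) * 1000, 2),
            }))
    return wrapper

@audited_tool
def query_database(sql: str) -> str:
    # Hypothetical tool handler; a real MCP server would execute the query.
    return f"rows for: {sql}"
```

Structured JSON logs like this make it straightforward to alert on unusual tool usage, such as a spike in database queries from a single agent session.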

Monitor Falco with Datadog

Organizations running containerized environments face complex security challenges as they scale Kubernetes and adopt dynamic, ephemeral infrastructure. Traditional security tools often miss activity inside containers, making it difficult to detect policy violations or threats at runtime. Falco is a runtime security monitoring tool for containerized infrastructure.
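Falco expresses runtime detections as rules over system events. The sketch below is a simplified example of flagging an interactive shell spawned inside a container; Falco's bundled ruleset includes a more complete version of this check.

```yaml
# Simplified sketch of a Falco rule; not the full rule Falco ships with.
- rule: Shell spawned in container
  desc: Detect an interactive shell process starting inside a container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: "Shell in container (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING
```

Forwarding these alerts to a monitoring platform gives teams a runtime signal that image scanning and admission control alone cannot provide.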

Using LLMs to filter out false positives from static code analysis

Static application security testing (SAST) is foundational to modern application and code security programs. Yet these tools inevitably produce false positives that require manual review. When scanners find vulnerabilities that are not genuine issues, they erode trust, slow down remediation, and make it harder for teams to understand which alerts require attention.
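One way to put an LLM in the triage loop is to send each finding to the model with a structured prompt and parse a constrained verdict back. The sketch below shows that shape; the finding fields (`rule`, `file`, `snippet`) are illustrative rather than a real scanner's schema, and the actual model call is omitted.

```python
import json

def build_triage_prompt(finding: dict) -> str:
    """Compose a prompt asking an LLM whether a SAST finding is real."""
    return (
        "You are reviewing a static-analysis finding. Reply with a JSON "
        'object like {"verdict": "true_positive" | "false_positive", '
        '"reason": "..."}.\n\n'
        f"Rule: {finding['rule']}\n"
        f"File: {finding['file']}\n"
        f"Code:\n{finding['snippet']}\n"
    )

def parse_verdict(response_text: str) -> str:
    """Extract the verdict, defaulting to manual review on malformed output."""
    try:
        verdict = json.loads(response_text).get("verdict", "")
    except json.JSONDecodeError:
        return "needs_human_review"
    if verdict in ("true_positive", "false_positive"):
        return verdict
    return "needs_human_review"
```

Defaulting to `needs_human_review` matters here: the LLM is filtering noise, not making the final security decision.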

LLM guardrails: Best practices for deploying LLM apps securely

Prompt guardrails are a common first line of defense against client-level LLM application attacks, such as prompt injection and context poisoning. They’re also a critical component of a full defense-in-depth strategy for LLM security at the infrastructure, supply chain, and application levels. The specific guardrails that teams implement depend heavily on the use case.
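As a minimal illustration of an input guardrail, the hypothetical check below screens user input against a few known injection phrasings before it reaches the model. The patterns are illustrative only; production guardrails typically pair heuristics like these with classifier models and output-side filtering.

```python
import re

# Illustrative injection phrasings, not an exhaustive ruleset.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def screen_prompt(user_input: str) -> bool:
    """Return True if input looks safe, False if it should be blocked
    or escalated for review."""
    return not any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A blocked prompt can be rejected outright or routed to a stricter handling path, depending on the application's risk tolerance.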

Monitor OCI Audit Logs with Datadog Cloud SIEM

Oracle Cloud Infrastructure (OCI) provides compute, storage, networking, and database services for running enterprise applications and workloads in Oracle’s cloud. OCI supports both traditional and cloud-native applications, offering scalable, secure, and high-performance infrastructure for hybrid and multi-cloud environments. Securing workloads in OCI can be complex, however, for organizations managing a mix of on-premises, hybrid, and cloud environments.

Datadog achieves IRAP's PROTECTED status in Australia

As Australian government agencies and regulated industries move sensitive workloads to the cloud, they need observability solutions that meet highly stringent data protection standards. To address this need, Datadog has pursued and received an Infosec Registered Assessors Program (IRAP) assessment at the PROTECTED level. This is an advanced classification under the Australian Cyber Security Centre (ACSC) framework for cloud and SaaS security.

Aligning SRE and security for better incident response

In this series, we looked at why we combined our SRE and security teams into one cohesive group, and how we made that happen. With this combined approach, we set out to build our internal platform and customer-facing products with a security-first mindset, while still drawing upon the deep expertise of our existing SRE practices. Combining the teams improved the way we build tools for both our engineers and customers and strengthened our ability to mitigate risks.

Abusing supply chains: How poisoned models, data, and third-party libraries compromise AI systems

The AI ecosystem is rapidly changing, and with this growth comes unique challenges in securing the infrastructure and services that support it. In Part 1 of this series, we explored how attackers target the underlying resources that host and run AI applications, such as cloud infrastructure and storage. In this post, we'll look at threats that affect AI-specific resources in supply chains, which are the software and data artifacts that determine how an AI service operates.
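A basic defense against poisoned artifacts is to pin and verify a cryptographic digest before loading any downloaded model or dataset. The sketch below shows the idea with SHA-256; the function name is hypothetical, and real pipelines would combine this with signed manifests and provenance checks.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded model or dataset file against a pinned
    SHA-256 digest; refuse to load it on mismatch."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

Pinning digests in the deployment config means a silently swapped artifact in an upstream registry fails loudly at load time instead of running in production.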

Abusing AI interfaces: How prompt-level attacks exploit LLM applications

In Parts 1 and 2 of this series, we looked at how attackers get access to and take advantage of the infrastructure and supply chains that shape generative AI applications. In this post, we'll discuss AI interfaces, which we define as the entry points and logic that determine how a user interacts with an AI application. These elements can include chat interfaces, such as AI assistants, and API endpoints for supporting services.

Abusing AI infrastructure: How mismanaged credentials and resources expose LLM applications

The swift adoption of generative AI (GenAI) by the software industry has introduced a new area of focus for security engineers: threats targeting the various components of their AI applications. Understanding how these components can be attacked will become increasingly important as the space evolves. In this series, we'll look at common threats that target the core components of AI applications: infrastructure, supply chains, and interfaces.
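Mismanaged credentials are often discoverable with simple pattern scans over configuration and source files. The sketch below shows the idea with two illustrative patterns; real secret scanners use far larger rulesets plus entropy checks to cut down on noise.

```python
import re

# Illustrative patterns only, not a production ruleset.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in a file's contents."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

Running a scan like this in CI keeps hardcoded keys out of the repositories that back LLM applications, shrinking the attack surface before deployment.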