
Abusing supply chains: How poisoned models, data, and third-party libraries compromise AI systems

The AI ecosystem is rapidly changing, and with this growth comes unique challenges in securing the infrastructure and services that support it. In Part 1 of this series, we explored how attackers target the underlying resources that host and run AI applications, such as cloud infrastructure and storage. In this post, we'll look at threats that affect AI-specific resources in supply chains, which are the software and data artifacts that determine how an AI service operates.

Abusing AI interfaces: How prompt-level attacks exploit LLM applications

In Parts 1 and 2 of this series, we looked at how attackers get access to and take advantage of the infrastructure and supply chains that shape generative AI applications. In this post, we'll discuss AI interfaces, which we define as the entry points and logic that determine how a user interacts with an AI application. These elements can include chat interfaces, such as AI assistants, and API endpoints for supporting services.

Abusing AI infrastructure: How mismanaged credentials and resources expose LLM applications

The swift adoption of generative AI (GenAI) by the software industry has introduced a new area of focus for security engineers: threats targeting the various components of their AI applications. Understanding how these areas are vulnerable to attacks will become increasingly significant as the space evolves. In this series, we'll look at common threats that target the core components of AI applications.

Monitor and optimize payment processing with Datadog's Adyen integration

Adyen is a global payment platform that supports transactions across web, mobile, and in-person channels. By consolidating payment flows into a single process, the platform helps merchants simplify operations and deliver consistent purchasing experiences. But payment processes are complex, often involving multiple steps that include authorization, capture, and refunds.
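As a rough illustration of that multi-step lifecycle (authorization, capture, refund), the sketch below models it as a small state machine. All class, state, and method names here are hypothetical and for illustration only; they are not part of Adyen's API or Datadog's integration.

```python
# Minimal sketch of a payment lifecycle state machine.
# States and transitions are illustrative, not Adyen API objects.

class Payment:
    # Each state maps to the set of states it may legally move to.
    TRANSITIONS = {
        "created": {"authorized", "failed"},
        "authorized": {"captured", "cancelled"},
        "captured": {"refunded"},
    }

    def __init__(self, amount_cents: int):
        self.amount_cents = amount_cents
        self.state = "created"

    def advance(self, new_state: str) -> None:
        allowed = self.TRANSITIONS.get(self.state, set())
        if new_state not in allowed:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state


p = Payment(1999)
p.advance("authorized")  # funds reserved on the shopper's card
p.advance("captured")    # funds actually transferred to the merchant
p.advance("refunded")    # money returned to the shopper
print(p.state)
```

Monitoring each transition (and, just as importantly, failures to transition) is exactly the kind of signal a payment-processing integration surfaces.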

PII Exposed in Your Logs? Fix It Fast With Observability Pipelines

Help keep your logs secure before they leave your environment. In this video, we’ll show you how to use Datadog Observability Pipelines to easily discover, classify, and manage sensitive information—like PCI, PII, or custom patterns—in your logs, on premises, to support compliance needs. Whether you’re in DevOps, security, or compliance, this workflow helps support your data privacy initiatives without disrupting your existing logging setup.
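To make the idea concrete, here is a minimal, stdlib-only sketch of the kind of pattern-based redaction such a pipeline performs before logs leave the environment. The regexes below are deliberately simplified examples, not Datadog's actual detection rules, which use far more robust techniques (checksums, context matching, and curated pattern libraries).

```python
import re

# Simplified redaction rules: (pattern, replacement) pairs.
# Real scanners validate matches (e.g., Luhn checks for card numbers).
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
]

def scrub(line: str) -> str:
    """Apply every redaction rule to a single log line."""
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line


log = "user=jane@example.com ssn=123-45-6789 card=4111 1111 1111 1111"
print(scrub(log))
# → user=[REDACTED_EMAIL] ssn=[REDACTED_SSN] card=[REDACTED_CARD]
```

Running redaction at the pipeline layer, rather than in each application, means every log source gets the same treatment with no code changes.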

Identify common security risks in MCP servers

AI adoption is rapidly increasing, and with that comes a steady influx of useful but potentially vulnerable tools and services still maturing in the AI space. The Model Context Protocol (MCP) is one example of new AI tooling, providing a framework for how applications integrate with and supply context to large language models (LLMs). MCP servers are central to developing AI assistants and workflows that are deeply integrated with your environment.
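The core pattern MCP formalizes—a server advertises tools and an assistant invokes them by name with structured arguments—can be sketched loosely in plain Python. This is not the real MCP protocol or SDK, just an illustration of why such servers are security-sensitive: each registered tool is a capability handed to the model.

```python
import json

# Loose, stdlib-only sketch of the tool-exposure pattern MCP
# formalizes. NOT the actual MCP protocol or SDK.

TOOLS = {}

def tool(fn):
    """Register a function as a tool the model may invoke."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_file(path: str) -> str:
    # Tools like this are why MCP servers need strict access controls:
    # an unvalidated path argument can leak sensitive files.
    with open(path) as f:
        return f.read()

def handle(request_json: str) -> str:
    """Dispatch a JSON request of the form {"tool": ..., "args": {...}}."""
    req = json.loads(request_json)
    fn = TOOLS.get(req["tool"])
    if fn is None:
        return json.dumps({"error": f"unknown tool: {req['tool']}"})
    try:
        return json.dumps({"result": fn(**req["args"])})
    except Exception as exc:
        return json.dumps({"error": str(exc)})


print(handle(json.dumps({"tool": "missing", "args": {}})))
# → {"error": "unknown tool: missing"}
```

Auditing what each tool can reach, and who can register tools, is the starting point for assessing an MCP server's risk surface.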

Bits AI Security Analyst: Automate Cloud SIEM investigations

Datadog's Bits AI Security Analyst transforms the way security teams handle investigations by autonomously triaging Datadog Cloud SIEM signals. Built natively in Datadog, it conducts in-depth investigations of potential threats and delivers clear, actionable recommendations. With context-rich guidance for mitigation, security teams can stay ahead of evolving threats with greater efficiency and precision.

Elevate web security and mitigate third-party risk with Reflectiz in the Datadog Marketplace

Modern websites have become increasingly reliant on third-party applications and open source tools to deliver functionality and enhance the user experience. However, this reliance introduces both security and privacy risks, as external code can act as a vector for sophisticated attacks, such as Magecart and web skimming. Without visibility into these apps and tools, organizations are left vulnerable to undetected threats, unauthorized data access, and regulatory violations.

Navigating Identity and Security in the Age of Agentic AI

As AI agents rapidly improve, becoming more autonomous and interconnected, they unlock new ways to assist us. But as they perform actions for us and delegate tasks to other AI agents, we need to reexamine our understanding of “identity.” How do we ensure these powerful AI interactions are authentic, authorized, and permissioned, while differentiating between legitimate actions and potential misuse? Join Datadog co-founder and CTO Alexis Lê-Quôc and Okta CTO Bhawna Singh as they explore the convergence of AI, security, and observability.