Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Turn AI ambition into secure operations

If you attended AWS re:Invent last year, it probably felt like there was an AI solution for everything. Models, copilots, agents; by the end, someone had to pitch an AI solution to summarize all of the other AI solutions. This year, it may still feel like the AI announcements multiply faster than the models themselves. Under all of the hype, one message still resonates: AI innovation only works when it’s built on a secure foundation.

Key learnings from the 2025 State of Cloud Security study

We have just released the 2025 State of Cloud Security study, in which we analyzed the security posture of thousands of organizations using AWS, Azure, and Google Cloud. In this post, we provide key recommendations based on these findings, and we explain how you can use Datadog Cloud Security to improve your security posture.

If AI Security were food... what's on the menu?

How do you explain AI Security without the jargon? Easy: you make it food. In this video, we asked leading AI Security professionals to describe AI Security as a dish. Their answers turn complex ideas like prompt injection, data leaks, and model hardening into bite-sized insights you’ll actually remember. From layered lasagna to spicy tacos, each response brings a fresh perspective on what it means to build and protect secure AI systems.

The Agentic OODA Loop: How AI and Humans Learn to Defend Together

Last week at the AI Security Summit, something profound happened. The first cohort of AI Security Engineers in the world earned their certification — a milestone that symbolized not just new skills, but a new mindset. For decades, security has been about control. Rules, gates, and policies that define what’s safe and what’s not. But the age of Agentic AI — systems that perceive, reason, act, and learn — is forcing us to evolve beyond static defenses.

The Great Divide: Can the Desktop and Cloud Truly Coexist?

The cloud has transformed collaboration. Teams now share documents, slides, spreadsheets, and more in real time, without worrying about content silos or email chains (a hotspot for inefficient sharing). For most files, it’s seamless. But here’s the reality: not all work lives comfortably in the cloud. Designers, video editors, engineers, and data scientists are just a few of the professionals who depend on native desktop apps to do their best work.

Uncovering the Shadow AI Paradox

Does the world really need another study of shadow AI? That was my first thought going into this project. Reading dozens of previous reports did not change that impression: there's a lot of shadow AI out there, and a lot of reports saying so. But the more I read, the more apparent it became that something important was missing. The endless supply of reports wasn't meeting the actual demand.

Detectify AI-Researcher Alfred gets smarter with threat actor intelligence

Six months after launch, Alfred, the AI Agent that autonomously builds security tests, has revolutionized our workflow. Alfred has delivered over 450 validated tests against high-priority threats (average CVSS 8.5) with 70% requiring zero manual adjustment, allowing our human security researchers to concentrate on more complex, high-impact issues. Now, we’re elevating Alfred’s capabilities by integrating real-world threat actor intelligence directly into its core system.

AI Browsers Are Silently Exfiltrating Sensitive Data - and Legacy DLP Can't See It

A new class of AI-powered browsers is rewriting the rules of data security. While CISOs focus on traditional vectors, employees are unknowingly creating permanent backdoors to your most sensitive data through browsers that remember everything, sync everywhere, and share it all with AI models. The bottom line: if you're not actively protecting against AI browser exfiltration, you're already leaking data. Here's why it's happening, what it costs, and how to stop it today.