
Secure private networking for everyone: users, nodes, agents, Workers - introducing Cloudflare Mesh

AI agents have changed how teams think about private network access. Your coding agent needs to query a staging database. Your production agent needs to call an internal API. Your personal AI assistant needs to reach a service running on your home network. The clients are no longer just humans or services. They're agents, running autonomously, making requests you didn't explicitly approve, against infrastructure you need to keep secure.

Securing non-human identities: automated revocation, OAuth, and scoped permissions

Agents let you build software faster than ever, but securing your environment and the code you write — from both mistakes and malice — takes real effort. The Open Web Application Security Project (OWASP) details several risks present in agentic AI systems, including credential leaks, user impersonation, and elevation of privilege.
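The controls named in the heading above — scoped permissions, short lifetimes, and automated revocation — can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation: the signing key, scope names, and in-memory revocation set are all hypothetical, and production systems would use a hardened token format such as JWT with proper key management.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"   # hypothetical key; never hardcode in production
REVOKED = set()                # illustrative in-memory revocation list

def mint_token(agent_id, scopes, ttl=300):
    """Mint a short-lived, scoped token for a non-human identity."""
    payload = {
        "sub": agent_id,
        "scopes": scopes,
        "exp": time.time() + ttl,
        "jti": hashlib.sha256(f"{agent_id}{time.time()}".encode()).hexdigest()[:16],
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def check_token(token, required_scope):
    """Verify signature, expiry, revocation status, and scope."""
    body, _, sig = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["jti"] in REVOKED or time.time() > payload["exp"]:
        return False
    return required_scope in payload["scopes"]

def revoke(token):
    """Automated revocation: deny-list the token's unique id."""
    body = token.partition(".")[0]
    REVOKED.add(json.loads(base64.urlsafe_b64decode(body))["jti"])
```

Scoping means a leaked token for one agent grants `db:read` but not `db:write`; revocation cuts off a compromised agent without rotating the signing key.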

Managed OAuth for Access: make internal apps agent-ready in one click

We have thousands of internal apps at Cloudflare. Some are apps we’ve built ourselves; others are self-hosted instances of software built by others. They range from business-critical apps nearly every person uses, to side projects and prototypes. All of these apps are protected by Cloudflare Access. But when we started using and building agents — particularly for uses beyond writing code — we hit a wall. People could access apps behind Access, but their agents couldn’t.
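The standard way to let an agent authenticate as itself, rather than impersonating a person, is the OAuth 2.0 client credentials grant (RFC 6749 §4.4). The sketch below builds such a token request; the endpoint URL, client id, and scope are placeholders, not Cloudflare's actual API — real identity providers publish their token endpoint in discovery metadata.

```python
import urllib.request
from urllib.parse import urlencode

# Hypothetical token endpoint for illustration only.
TOKEN_URL = "https://idp.example.com/oauth2/token"

def build_token_request(client_id, client_secret, scope):
    """Build an OAuth 2.0 client_credentials request: the agent presents
    its own credentials and asks for an access token limited to `scope`."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
    return urllib.request.Request(
        TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_token_request("agent-123", "s3cret", "wiki:read")
# Sending req with urllib.request.urlopen(req) would return JSON containing
# a bearer access_token the agent attaches to subsequent requests.
```

Because the agent has its own client identity, an access proxy can log, scope, and revoke its traffic separately from the human who deployed it.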

Why Automotive & Manufacturing Can't Afford to Delay Key Management Strategy

In automotive and manufacturing, digital transformation is no longer a future ambition—it’s operational reality. Connected vehicles, smart factories, and increasingly complex supply chains have introduced a new dependency: trusted device identity and secure key management at scale. And yet, many organisations have not kept pace. This gap is no longer just a technical issue—it’s a business risk.

Why Securing AI Code Generation is Critical for AppSec

The revolution is here, but it’s not what we expected. AI coding assistants have transformed software development, with developers shipping code faster than ever before. GitHub Copilot, Amazon CodeWhisperer, and Claude Code have become as essential to modern development as Git itself. The productivity gains are undeniable; what once took hours now takes minutes. But there’s a dangerous blind spot in this revolution: security.

Beyond the Fence: Securing Our Skies from the Drone Threat

For decades, security leaders have optimized defenses in two dimensions. Doors, locks, fences, cameras, access badges, identity systems, and multi-factor authentication have all been designed to control who and what moves through physical and digital perimeters. But as experts discussed during RSAC 2026, something fundamental has changed: the threat landscape has gone airborne.

Claude Mythos Changed Everything. Your APIs Are the First Target.

Anthropic just released Claude Mythos Preview. They did not make it publicly available. That decision alone should tell you everything you need to know about what this model can do. During internal testing, Mythos autonomously discovered and exploited zero-day vulnerabilities across every major operating system and web browser. It found a 27-year-old bug in OpenBSD. A 16-year-old vulnerability in a widely used media codec.

Detect runtime threats in Python Lambda functions with Datadog AAP

Python AWS Lambda functions are ephemeral and highly distributed, which creates security visibility gaps that traditional perimeter defenses and proxy-based controls struggle to fill. Techniques such as credential stuffing, SQL injection, and server-side request forgery (SSRF) can look like legitimate application traffic, making them difficult to identify without visibility inside the application itself.
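To make the SSRF case concrete, here is a minimal Lambda-style handler sketch. It is not Datadog's instrumentation — the event shape and function names are assumptions for illustration — but it shows why SSRF blends into legitimate traffic: the request is an ordinary URL fetch, and only inspection of the target inside the application reveals the attack.

```python
import ipaddress
from urllib.parse import urlparse

def is_ssrf_suspect(url):
    """Flag URLs whose host is an IP literal in a private, loopback,
    link-local, or reserved range -- classic SSRF targets such as the
    cloud metadata endpoint 169.254.169.254. Hostname targets still need
    DNS-resolution checks, which this sketch omits."""
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal; resolve and re-check in real code
    return ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved

def handler(event, context=None):
    """Minimal Lambda-style handler: reject fetches aimed at internal
    addresses before any outbound request is made."""
    url = event.get("url", "")
    if is_ssrf_suspect(url):
        return {"statusCode": 400, "body": "blocked: internal target"}
    return {"statusCode": 200, "body": f"would fetch {url}"}
```

A perimeter proxy sees only an outbound HTTP request from the function; the distinction between `https://example.com/api` and `http://169.254.169.254/latest/meta-data/` is visible only with this kind of in-application check, which is the visibility gap runtime protection aims to close.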

What AI Operator-First SOC Looks Like, and Why It Matters Now

There is a version of AI SOC that most security teams are familiar with. It summarizes alerts. It surfaces recommendations. It tells an analyst what to look at next. It is useful in the way a well-organized report is useful: it saves time reading, but the work still happens at a human pace. That version of AI is not what this blog is about. For MSSPs and SecOps teams operating at scale, advisory AI is not a destination. In fact, it presents a bottleneck in a different form.