
An early look at cryptographic watermarks for AI-generated content

Generative AI is reshaping many aspects of our lives, from how we work and learn, to how we play and interact. Given that it's Security Week, it's a good time to think about some of the unintended consequences of this information revolution and the role that we play in bringing them about.

Take control of public AI application security with Cloudflare's Firewall for AI

Imagine building an LLM-powered assistant trained on your developer documentation and some internal guides to quickly help customers, reduce support workload, and improve user experience. Sounds great, right? But what if sensitive data, such as employee details or internal discussions, is included in the data used to train the LLM?

Cloudflare for AI: supporting AI adoption at scale with a security-first approach

AI is transforming businesses — from automated agents performing background workflows, to improved search, to easier access to and summarization of knowledge. We are still early in what is likely to be a substantial shift in how the world operates, but two things are clear: the Internet, and how we interact with it, will change, and the boundaries of security and data privacy have never been harder to trace — making security a central concern in this shift.

Data Leaks and AI Agents: Why Your APIs Could Be Exposing Sensitive Information

Most organizations are using AI in some way today, whether they know it or not. Some are merely beginning to experiment with it, using tools like chatbots. Others, however, have integrated agentic AI directly into their business processes and APIs. While both kinds of organizations are undoubtedly seeing remarkable productivity and efficiency gains, they may not know they are also exposing themselves to significant security risk.

A litmus test for AI agents

What is an “AI agent”? Confusion abounds. There is also some consensus: agents must of course be AI-driven systems. They should have some degree of autonomy, and they should be able to use tools in addition to understanding and reasoning. But why isn't, say, ChatGPT an agent? According to most definitions out there, it actually is. Yet most (including OpenAI themselves) don’t describe it that way.

Understanding and Securing Exposed Ollama Instances

Ollama is an emerging open-source framework designed to run large language models (LLMs) locally. While it provides a flexible and efficient way to serve AI models, improper configurations can introduce serious security risks. Many organizations unknowingly expose Ollama instances to the Internet, leaving them vulnerable to unauthorized access, data exfiltration, and adversarial manipulation.
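The exposure described above is easy to check for yourself. The sketch below probes a host for an unauthenticated Ollama API; the port (11434) and the `/api/tags` model-listing endpoint are Ollama defaults, but the helper names (`probe_url`, `is_exposed`) are illustrative, not from the post:

```python
import json
import urllib.error
import urllib.request

def probe_url(host: str, port: int = 11434) -> str:
    """Build the URL for Ollama's model-listing endpoint (11434 is the default port)."""
    return f"http://{host}:{port}/api/tags"

def is_exposed(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if the host answers Ollama's /api/tags endpoint without
    authentication -- a sign the instance is reachable by anyone who can
    connect to that port."""
    try:
        with urllib.request.urlopen(probe_url(host, port), timeout=timeout) as resp:
            body = json.load(resp)
            # An open Ollama instance replies with a JSON object
            # containing a "models" list.
            return isinstance(body, dict) and "models" in body
    except (urllib.error.URLError, ValueError, OSError):
        # Connection refused, timeout, DNS failure, or non-JSON reply:
        # treat as not exposed (or not Ollama).
        return False
```

Running `is_exposed()` against your own public IP ranges is a quick first pass; the common fix is to bind Ollama to localhost or put it behind an authenticating reverse proxy rather than exposing the port directly.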

Panel Discussion - The Evolving Threat Landscape: Risks in the Age of AI Disruption | DevSecNext

As AI reshapes industries, it also introduces a wave of emerging security risks—some known, others yet to be discovered. In this DevSecNext panel discussion, experts from engineering, cloud business, venture capital, and security innovation dive deep into the intersection of AI disruption and the evolving threat landscape. This talk was recorded at DevSecNext, a community-driven event reimagining how we share security insights—short, to the point, and packed with actionable takeaways.