
Protect AI-powered apps with Cloudflare Firewall for AI

As organizations refactor applications and adopt AI and Large Language Models (LLMs) to power new applications and enhance existing services, a new class of security vulnerabilities has emerged. Traditional web application firewalls (WAFs) are only partially equipped to defend against threats unique to AI. In this video, we provide an overview of Cloudflare's Firewall for AI product, how it works, and how you can use it to protect AI models and safeguard user interactions with those models.

Humans at the Center: Redefining the Role of Developers in an AI-Powered Future

In a previous blog, we discussed how AI is reshaping software development at every level. This shift means developers need new skills to stay effective. In fact, Gartner predicts that generative AI will require 80% of the engineering workforce to upskill through 2027. So what can today’s developers do to stay ahead? Here are a few steps to consider.

AI-Driven Cyber Defense in Action: How AI Agents Are Saving SOC Analysts From Burnout

AI-powered SOC platforms are revolutionizing cybersecurity by dramatically reducing false positives and enabling analysts to focus on high-value security work. In this episode of Data Security Decoded, join Caleb Tolin as he sits down with Grant Oviatt, Head of Security Operations at Prophet Security, to explore how AI agents are transforming security operations centers (SOCs) and reshaping the future of cyber defense.

Snyk for Government Achieves FedRAMP Moderate Authorization: A Milestone for Secure Government Software

Today marks a significant milestone for Snyk and, more importantly, for the security posture of the U.S. government. I'm thrilled to introduce Snyk for Government, our FedRAMP Moderate authorized solution for the public sector. This authorization underscores our unwavering commitment to providing secure development solutions that meet the rigorous standards of the Federal Risk and Authorization Management Program (FedRAMP).

The Future of Developer Upskilling Is Human-Led, AI-Supported

In the last year, generative AI has dramatically accelerated how software is written. Developers can generate entire functions with a prompt, automate repetitive logic, and offload everything from boilerplate code to documentation. But with this newfound speed comes a deeper, more complex challenge: ensuring that what’s being created is secure, trustworthy, and production-ready.

Validating the Mission: Zenity Labs Research Cited in Gartner's AI Platform Analysis

Research is what turns cybersecurity from a reactive scramble into a proactive discipline. It’s how security teams uncover new threats, pressure-test defenses, and understand the unintended consequences of innovation (especially as AI agents reshape the attack surface). At Zenity, research isn’t a side effort. It’s how we build, challenge, and ultimately secure what’s next.

Shadow AI: Managing the Security Risks of Unsanctioned AI Tools

The explosion of generative artificial intelligence tools is sparking a wave of enthusiasm in workplaces, with employees eagerly embracing new applications to boost productivity and innovation. However, this adoption often leads to a new phenomenon known as shadow AI—the use of artificial intelligence tools within an organization without explicit approval or oversight from IT and security teams. Unsanctioned use of AI creates significant (and often invisible) security blind spots.