
DSPM, DLP, and AI Security: Why You Need All Three

Security budgets are tightening, and tool consolidation reviews keep landing on the same three categories: data security posture management (DSPM), data loss prevention (DLP), and AI security. At the same time, vendor marketing has done little to clarify how the three differ, or what path forward exists for organizations that need to strengthen data security efficiently.

AWS Accelerator Program: How to Move to the Cloud Faster (and Cheaper)

Cloud migrations have a reputation for running over budget and behind schedule. That reputation isn’t entirely undeserved — migrations done without structure often do. But AWS has invested heavily in programs that give businesses a faster, cheaper path to the cloud, and most organizations don’t know they exist or how to access them. The AWS Accelerator Program is one of the more practical frameworks available for SMBs and mid-market companies planning a move to AWS.

Kubernetes for Agentic AI: Best Practices for Security and Observability

Agentic AI workloads are shipping to production on Kubernetes faster than the standards to secure them are emerging. Many teams deploying autonomous, tool-calling agents as containerized microservices do so without a shared baseline for securing or monitoring those containers. The CNCF AI Technical Community Group recently published a comprehensive article on cloud-native agentic standards, marking the first attempt to define best practices for such deployments.

AI-Driven DAST for Mobile Apps: The Next Evolution of Dynamic Security Testing

“AI-powered DAST” is everywhere. It signals progress, but assumes something fundamental was missing. It wasn’t. DAST struggled not from lack of intelligence, but from lack of depth. Most tools never reached inside authenticated, stateful, multi-step journeys where real logic, sensitive data, and critical vulnerabilities exist. That’s the part Appknox solved years ago. AI here is not a reset. It is an accelerator, applied to a system already operating where risk actually lives.

Building AI Security with Our Customers: 5 Lessons from Evo's Design Partner Program

In 2025, we embarked on a new journey to secure the most important technology transformation of this decade – generative AI. Our vision is to help companies secure their AI fast, so that they can innovate on the cutting edge and put AI and agentic use cases into production. To do this, we built Evo, the world’s first agentic orchestrator for AI security. The foundation of any product is customer needs.

Open Banking API Security: The Complete Guide for 2026

Global open banking API call volumes are set to cross the 720 billion mark by 2029, and attackers know it. With the global open banking market surging past $38 billion in 2025 alone and projected to exceed $115 billion by 2030, the financial data flowing through these APIs is highly lucrative for threat actors. With over 7.5 million calls made to AI APIs alone, securing open banking APIs has now graduated from a technical challenge to a business imperative.

RSA and DC Dispatches: Agentic AI Security Is the Story, Government Policy Needs to Catch Up

Fresh off two weeks of back-to-back meetings in Washington, DC, and on the floor/in the wings of the RSA Conference, one theme echoed through nearly every conversation I had with senior government officials and public policy leaders from global technology companies: agentic AI security is the defining emerging security challenge of this moment — and policy is not keeping pace.

Our ongoing commitment to privacy for the 1.1.1.1 public DNS resolver

Exactly 8 years ago today, we launched the 1.1.1.1 public DNS resolver, with the intention to build the world’s fastest resolver — and the most private one. We knew that trust is everything for a service that handles the "phonebook of the Internet." That’s why, at launch, we made a unique commitment to publicly confirm that we are doing what we said we would do with personal data.

What Is an AI-BOM? Why Static Manifests Fall Short

Your AI-BOM shows every model, tool, and data source you deployed. But when your SOC investigates an alert about unusual agent behavior, that inventory tells them nothing about what actually happened at runtime. Static AI-BOMs document what you intended to run. Attackers exploit what your AI workloads actually do in production: which APIs they call, what data they touch, and how they use approved tools in unapproved ways.