Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI Agents Are The New Detection Problem Nobody Designed For

AI agents now operate as core identities in enterprise environments, authenticating, accessing data, and executing workflows at machine speed. Their flexibility and scale introduce a detection challenge traditional security models were never built to solve. Exabeam has seen this pattern before with insider threat and workload identities. AI agents accelerate the need for identity-centric detection.

International AI Safety Report 2026: What It Means for Autonomous AI Systems

The International AI Safety Report 2026 is one of the most comprehensive overviews to date of the risks posed by general-purpose AI systems. Compiled by over 100 independent experts from more than 30 countries, it shows that while AI systems are performing at levels that seemed like science fiction only a few years ago, the risks of misuse, malfunction, and systemic and cross-border harms are clear. It makes a compelling case for better evaluation, transparency, and guardrails.

The 2026 Forecast for AI-Driven Threats

2025 changed the shape of digital risk. In 2026, the impact accelerates. The fastest-growing threats no longer look like traditional attacks. They arrive through apparently legitimate automated access – AI agents, LLM crawlers, and delegated automation interacting directly with revenue-critical systems. They don’t trigger alarms. They quietly extract value, distort pricing logic, and reshape digital economics at scale.

I Built a Production-Ready App in 20 Minutes with Claude Opus 4.6

My boss dropped a bombshell at 4:00 PM: build a secure, production-ready app from scratch by tomorrow morning. Instead of panicking, I put Claude Opus 4.6 to the test. In this video, I walk you through the entire end-to-end process of using an AI agent to architect, code, and debug a full-stack application. We’ll look at "Plan Mode," how the AI handles environment errors (like Windows SQLite issues), and most importantly, how we verified the AI's code for security vulnerabilities using Snyk.

AI Security in 2026 Starts With Identity #cybersecurity #datasecurity #identitysecurity

As AI adoption grows, identity risk grows with it. Dirk Schrader, VP of Security Research at Netwrix, explains why governing human and machine identities is foundational to securing AI systems. How are you governing identity in your AI workflows today?

Intelligent AI Routing Rules That Pick the Cheapest Model That Still Meets Quality (with Practical Examples)

Most teams do one of two things with LLMs: they pick one "safe" premium model and accept the bill, or they swap models by hand and hope nothing breaks. Both approaches get old fast when traffic grows, prices change, or one provider has a rough day. Intelligent routing rules fix that by making model choice automatic. Instead of "always use Model X," you set constraints like price, latency budget, context window, and a minimum quality bar. Each request gets the cheapest model that can still do the job, and it escalates only when it needs to.
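The constraint-based routing described above can be sketched in a few lines of Python. This is an illustrative example, not any vendor's actual router: the model names, prices, and benchmark scores below are made up, and a real deployment would pull them from live provider metadata.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    price_per_1k_tokens: float  # USD, illustrative blended rate
    max_context: int            # tokens
    p95_latency_ms: int         # observed p95 latency
    quality: float              # 0-1 score from your own eval suite

# Hypothetical catalog; every number here is an assumption for illustration.
CATALOG = [
    Model("small-fast", 0.0002, 16_000, 400, 0.62),
    Model("mid-tier",   0.0015, 128_000, 900, 0.78),
    Model("premium",    0.0100, 200_000, 1500, 0.92),
]

def route(prompt_tokens: int, min_quality: float, latency_budget_ms: int) -> Model:
    """Return the cheapest model that satisfies every constraint."""
    eligible = [
        m for m in CATALOG
        if m.quality >= min_quality
        and m.max_context >= prompt_tokens
        and m.p95_latency_ms <= latency_budget_ms
    ]
    if not eligible:
        # Escalation path: if no model meets all constraints,
        # fall back to the highest-quality option rather than fail.
        return max(CATALOG, key=lambda m: m.quality)
    return min(eligible, key=lambda m: m.price_per_1k_tokens)
```

A short request with a modest quality bar routes to the cheapest model; raising `min_quality` or the context requirement escalates automatically, which is exactly the "cheapest model that still does the job" behavior the rules are meant to encode.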

LLM Application for Protegrity AI Developer Edition

Securing LLM Workflows with Protegrity AI Developer Edition: learn how to protect sensitive data and prevent malicious prompt injections in your AI applications. In this technical walkthrough, Dan Johnson, Software Engineer at Protegrity, demonstrates a dual-gate security architecture designed to safeguard Large Language Models. Discover how to implement a security gateway that sits between your users and your LLM. This demonstration covers the integration of semantic guardrails and classification APIs to ensure data privacy and system integrity.
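The dual-gate pattern itself is easy to illustrate independently of Protegrity's product. The sketch below is a generic, assumption-laden toy (simple regex checks standing in for the semantic guardrails and classification APIs the walkthrough uses); it shows the shape of a gateway that screens input before the LLM and scrubs output after it, not Protegrity's actual implementation.

```python
import re

# Toy stand-ins for real semantic guardrails / classifiers.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # example sensitive-data rule

def input_gate(prompt: str) -> bool:
    """Gate 1: reject prompts that look like injection attempts."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def output_gate(response: str) -> str:
    """Gate 2: redact sensitive data before it reaches the user."""
    return SSN_PATTERN.sub("[REDACTED]", response)

def gateway(prompt: str, llm_call) -> str:
    """Route every request through both gates around the model call."""
    if not input_gate(prompt):
        return "Request blocked by security policy."
    return output_gate(llm_call(prompt))
```

In production the regexes would be replaced by classification calls, but the control flow is the same: no prompt reaches the model ungated, and no response leaves unscrubbed.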

Jupyter Notebook for Protegrity AI Developer Edition

Want to test Protegrity’s data protection features without any local installation? In this tutorial, Dan Johnson shows you how to make your first protect and unprotect API calls directly in your browser using our interactive Jupyter Notebook (Binder). This is the fastest way to see Protegrity’s Python SDK in action—authenticating, applying protection policies, and maintaining data utility in real time.

Clawing For Scraps: Risks of OpenClaw AKA ClawdBot

The world of AI is still advancing rapidly, but so are the threats. Wherever you get your news, Clawdbot (or is it Moltbot? or is it OpenClaw now?) is everywhere lately. You can’t avoid talk of this AI personal assistant. After some naming drama it’s now officially called OpenClaw, and at the time of writing the project has 166k stars on GitHub. The repository also has an alarming number of forks, open issues, and pull requests.

Privacy in the AI Age: What's Really Changing in 2026 (with Cloudflare's CPO)

In this episode of This Week in NET, host João Tomé is joined by Emily Hancock, Cloudflare’s Chief Privacy Officer and Data Protection Officer, for a wide-ranging conversation about privacy in 2026 and how the role has evolved in the age of AI.