Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Why we can't have nice things! ...Or can we?

On 7th April 2026, Anthropic published a system card for an AI model we may never be allowed to use: Claude Mythos. The preview demonstrated a significant leap in capability over Anthropic's previous Claude model (Opus 4.6), and under its Responsible Scaling Policy (RSP) v3.1, Anthropic decided to withhold the model from general availability, reserving it as a "defensive only" asset.

5 Themes From a Candid Discussion

The Eskenzi IT Security Analyst & CISO Forum wasn't a typical security event. It was a gathering of CISOs, analysts, and security leaders speaking candidly under the Chatham House Rule about what's actually breaking, what's working, and where things are heading. Here are 5 key themes that came through loud and clear. None of them was surprising on its own, but together they paint a pretty stark picture of where security and AI are right now.

A Critical Look at OpenClaw and NemoClaw

Surprise, surprise: agentic AI is advancing very quickly, and security isn't quite keeping up. While most recent attention has focused on improving model capability, the harder question is how to make these systems safe enough to trust with real-world tasks under limited human interaction. That challenge has become particularly evident with the rise of platforms like OpenClaw, where autonomous agents can execute multi-step actions with minimal human oversight.