Why we can't have nice things! ...Or can we?

On 7th April 2026, Anthropic published a system card for an AI model we may never be allowed to use: Claude Mythos. This preview demonstrated a significant leap in capability over Anthropic's previous Claude model (Opus 4.6), and under its Responsible Scaling Policy (RSP) v3.1 Anthropic decided to withhold the model from general availability, reserving it as a "defensive only" asset.

5 Themes From a Candid Discussion

The Eskenzi IT Security Analyst & CISO Forum wasn’t a typical security event. This forum was a gathering of CISOs, analysts, and security leaders speaking candidly under Chatham House Rule about what’s actually breaking, what’s working, and where things are heading. Here are 5 key themes that came through loud and clear. None of them were surprising. But together, they paint a pretty stark picture of where security and AI are right now.

A Critical Look at OpenClaw and NemoClaw

Surprise, surprise: agentic AI is advancing very quickly, and security isn't quite keeping up. While most attention in recent times has focused on improving model capability, we've often been left wondering how to actually make these systems safe enough to trust with real-world tasks under limited human interaction. This challenge has become particularly evident with the rise of platforms like OpenClaw, where autonomous agents can execute multi-step actions with minimal human oversight.

AI Adoption Surging in Financial Services - But Control Lagging

Artificial intelligence is moving rapidly from experimentation into everyday use across financial services. From client servicing and research to operations and risk analysis, AI is increasingly embedded in core workflows. This shift is widely recognised within the industry. Recent research indicates that 67% of financial services organisations report rapid AI adoption, with 93% ranking AI as a top security priority heading into 2026. At the same time, governance structures are being established.

RSA 2026: The Shift Toward Security FOR AI

RSA Conference 2026 made one thing clear very quickly. Security leaders are done with generic AI pitches. After two years of relentless “AI everything,” the market is now pushing back. There is a growing fatigue with vague promises, surface-level features, and what many are calling outright AI washing. The result is a trust gap. What cut through this year was not another AI-powered detection claim. It was a much more grounded question.

The AI Control Gap: Why Partners Are Now on the Front Line

For channel partners, AI has quickly moved from a future conversation to a current customer problem. Clients are already using AI across their organisations, often faster than governance can keep up. What’s emerging is not just another technology trend, but a new class of risk that customers cannot fully see or control. Our latest research, based on insights from senior security leaders in highly regulated industries, highlights the scale of the issue.

6 Strategic Implications of AI for Security Leaders in 2026

There is a structural shift happening in enterprise environments that most security leaders recognise, but few have fully adapted to. AI is now embedded, decentralised, and operating across core workflows. At the same time, governance models are still largely built on assumptions that no longer hold: that tools are known, data flows are observable, and behaviour follows policy. The result is a widening gap between perceived control and operational reality.

Why Legal AI Governance Must Operate at the Point of Use

A recent report of a solicitor facing regulatory investigation after uploading client documents into ChatGPT is not an isolated incident. It is a visible symptom of a broader structural issue unfolding across highly regulated industries. Legal professionals operate under strict duties of confidentiality, and yet the tools reshaping their workflows are being adopted faster than governance and operational controls can keep pace. The challenge is not whether AI should be used in legal practice.

5 AI Myths Exposing the Governance Gap

AI adoption isn’t slowing down. It’s accelerating, quietly, unevenly, and often outside formal control. To separate assumption from reality, CultureAI commissioned an independent research study of 300 senior technology, security, and risk leaders across North America and Europe. Respondents included CISOs, CIOs, CTOs, Data Protection Officers, and senior IT and security leaders across finance, healthcare, technology, legal, and professional services.

A January Snapshot: Real-World AI Usage

AI is no longer a fringe productivity experiment inside organisations; it is embedded, habitual, and increasingly invisible. This snapshot from CultureAI's January usage data highlights how AI is actually being used across everyday workflows, and where risk is forming as a result. Rather than focusing on hypothetical threats or model-level concerns, the findings below surface behavioural signals from real interactions: prompts, file uploads, and context accumulation.