  |  By CultureAI Team
On 7th April 2026, Anthropic published a system card for an AI model we may never be allowed to use: Claude Mythos. The preview demonstrated a significant leap in capability over Anthropic’s previous Claude model (Opus 4.6), and under their Responsible Scaling Policy (RSP) v3.1 they decided to withhold it from general availability, reserving it as a "defensive only" asset.
  |  By CultureAI Team
The Eskenzi IT Security Analyst & CISO Forum wasn’t a typical security event. This forum was a gathering of CISOs, analysts, and security leaders speaking candidly under the Chatham House Rule about what’s actually breaking, what’s working, and where things are heading. Here are five key themes that came through loud and clear. None of them were surprising. But together, they paint a pretty stark picture of where security and AI are right now.
  |  By CultureAI Team
Surprise, surprise, agentic AI is advancing very quickly, and security isn’t quite keeping up. While most attention in recent times has focused on improving model capability, a harder question has often been left open: how do we make these systems safe enough to trust with real-world tasks when human interaction is limited? This challenge has become particularly evident with the rise of platforms like OpenClaw, where autonomous agents can execute multi-step actions with minimal human oversight.
  |  By CultureAI Team
Artificial intelligence is moving rapidly from experimentation into everyday use across financial services. From client servicing and research to operations and risk analysis, AI is increasingly embedded in core workflows. This shift is widely recognised within the industry. Recent research indicates that 67% of financial services organisations report rapid AI adoption, with 93% ranking AI as a top security priority heading into 2026. At the same time, governance structures are being established.
  |  By CultureAI Team
RSA Conference 2026 made one thing clear very quickly. Security leaders are done with generic AI pitches. After two years of relentless “AI everything,” the market is now pushing back. There is a growing fatigue with vague promises, surface-level features, and what many are calling outright AI washing. The result is a trust gap. What cut through this year was not another AI-powered detection claim. It was a much more grounded question.
  |  By CultureAI Team
For channel partners, AI has quickly moved from a future conversation to a current customer problem. Clients are already using AI across their organisations, often faster than governance can keep up. What’s emerging is not just another technology trend, but a new class of risk that customers cannot fully see or control. Our latest research, based on insights from senior security leaders in highly regulated industries, highlights the scale of the issue.
  |  By CultureAI Team
There is a structural shift happening in enterprise environments that most security leaders recognise, but few have fully adapted to. AI is now embedded, decentralised, and operating across core workflows. At the same time, governance models are still largely built on assumptions that no longer hold: that tools are known, data flows are observable, and behaviour follows policy. The result is a widening gap between perceived control and operational reality.
  |  By CultureAI Team
A recent report of a solicitor facing regulatory investigation after uploading client documents into ChatGPT is not an isolated incident. It is a visible symptom of a broader structural issue unfolding across highly regulated industries. Legal professionals operate under strict duties of confidentiality, and yet the tools reshaping their workflows are being adopted faster than governance and operational controls can keep pace. The challenge is not whether AI should be used in legal practice.
  |  By Ria Manzanero
AI adoption isn’t slowing down. It’s accelerating, quietly, unevenly, and often outside formal control. To separate assumption from reality, CultureAI commissioned an independent research study of 300 senior technology, security, and risk leaders across North America and Europe. Respondents included CISOs, CIOs, CTOs, Data Protection Officers, and senior IT and security leaders across finance, healthcare, technology, legal, and professional services.
  |  By CultureAI Team
AI is no longer a fringe productivity experiment inside organisations; it is embedded, habitual, and increasingly invisible. This snapshot from CultureAI’s January usage data highlights how AI is actually being used across everyday workflows, and where risk is forming as a result. Rather than focusing on hypothetical threats or model-level concerns, the findings below surface behavioural signals from real interactions: prompts, file uploads, and context accumulation.
  |  By CultureAI
In this one-off exclusive podcast, Oliver Simonnet, CultureAI's Lead Cyber Security Researcher, sits down with William Jardine, Director at Reversec, and Richard Moore, CISO at 10x Banking, to explore the evolving realities of cyber resilience, AI adoption, and security leadership in a world where AI-driven workflows are becoming the norm.
  |  By CultureAI
Monitor, reduce, and fix your human cyber risks. The CultureAI Human Risk Management Platform enables security teams to proactively monitor human risk across multiple applications, providing immediate visibility into the riskiest employee behaviours and security vulnerabilities within an organisation.
  |  By CultureAI
Discover:
✅ Why even the savviest individuals struggle to avoid phishing traps, especially amidst multiple software sign-ups and cloud managed services.
✅ From an organisation's standpoint, why acknowledging and reporting phishing attempts, like John's simulated case, is a crucial step towards better security.
  |  By CultureAI
Discover:
✅ The current state of the security awareness and training market
✅ The future of Human Risk Management and how it is evolving
✅ The importance of defining job roles in Human Risk Management
✅ How to quantify and measure data related to Human Risk Management

CultureAI’s innovative Human Risk Management Platform empowers you to identify security risks, educate employees in real time, and nudge them to make immediate fixes.

Strengthen resilience against phishing, improve SaaS security, reduce data loss through generative AI, and more. We help security teams identify and manage their most prominent employee security risks in one comprehensive platform.

End-to-end Human Risk Management:

  • Monitor 40+ Human Risks: Surface, track and manage risks created by employees and understand where you're vulnerable.
  • Security Coaching: Reduce the number of risky behaviours using risk data to drive personalised coaching that improves behaviour.
  • Automated Interventions: Where possible, reduce risks further by automatically mitigating employees' risky behaviour.
  • Security Nudges: Just-in-time notifications that alert employees to their own risks and offer one-click fixes.

The #1 platform to improve cyber security behaviours and reduce security incidents caused by employees.