Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

We let OpenClaw loose on an internal network. Here's what it found

Following our article on the challenges posed by agentic AI, we gave OpenClaw access to one of our legacy networks. In my previous article on OpenClaw I wrote: “Even the most ‘risk-on’ organizations with deep AI and security experience will likely find it challenging to configure OpenClaw in a way that effectively mitigates the risk of compromise or data loss, while still retaining any productivity value.” The Red Team here at Sophos took that as ‘challenge accepted’.

The NVD Funding Crisis Was Bigger Than Mythos

Everyone is calling Claude Mythos a watershed moment. I’d like to offer a slightly different take. Not because the capability isn’t real; it is. But if Mythos is the moment that finally convinced your organization that rapid vulnerability discovery is an existential threat, you’ve been watching the wrong thing. We saw this coming. Vulnerability management has been moving in this direction for years, and we built Nucleus with this trajectory in mind. What surprises me is the surprise.

New Data Shows Why Security Teams Can't Keep Up With AI-Driven Attacks

AI is changing how attacks happen, and how fast they happen. Seemplicity’s 2026 State of Exposure Management report shows that most security teams aren’t struggling to find risk; they’re struggling to fix it quickly enough. Based on insights from 300 security leaders, it highlights where remediation breaks down, how AI is being used today, and why execution is becoming the real bottleneck.

Opti9 Becomes Authorized Anthropic Reseller via Amazon Bedrock

Opti9 recently announced it has been approved as an authorized reseller for Anthropic models through Amazon Bedrock, further strengthening its ability to deliver secure, enterprise-grade AI solutions on Amazon Web Services (AWS). In October, AWS enabled its Solution Provider Partners to resell Amazon Bedrock, a fully managed service that provides access to a wide range of leading foundation models from top providers.

The Era of Agentic Security is Here: Key Findings from the 1H 2026 State of AI and API Security Report

The era of human-centric API consumption is officially ending. Over the past year, enterprises have rapidly transitioned from simply experimenting with Generative AI to deploying autonomous AI agents that drive core business operations. These agents act as digital employees. They utilize Large Language Models (LLMs) for reasoning, Model Context Protocol (MCP) servers for connectivity, and internal APIs for execution. This evolution has fundamentally altered the enterprise attack surface.
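The three-layer pattern described above (an LLM for reasoning, MCP-style tooling for connectivity, internal APIs for execution) can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not any vendor's implementation; every name here (`plan_step`, `TOOL_REGISTRY`, `lookup_invoice`) is a hypothetical stand-in:

```python
# Hypothetical sketch of the agent pattern described in the report:
# an LLM "reasons" about the next step, an MCP-style registry provides
# connectivity, and an internal API performs the actual execution.

def plan_step(goal: str) -> str:
    """Stand-in for an LLM call: decide which tool to invoke.
    A real agent would send `goal` to a model; here the choice is hard-coded."""
    return "lookup_invoice"

def lookup_invoice(invoice_id: str) -> dict:
    """Stand-in for an internal API reached through an MCP-style tool."""
    return {"id": invoice_id, "status": "paid"}

# MCP-style registry mapping tool names to callables.
TOOL_REGISTRY = {"lookup_invoice": lookup_invoice}

def run_agent(goal: str, arg: str) -> dict:
    tool_name = plan_step(goal)       # 1. LLM picks a tool
    tool = TOOL_REGISTRY[tool_name]   # 2. MCP-style registry resolves it
    return tool(arg)                  # 3. internal API executes

result = run_agent("check invoice 42", "42")
```

Even in this toy form, the report's point is visible: each hop (model, registry, API) is a machine-to-machine trust boundary with no human in the loop, and each one widens the attack surface.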

How to Handle AI Policy Enforcement in the Era of Shadow AI

Here’s the reality most security teams are already living: over 80% of employees are using unapproved AI tools at work, and nearly half are actively hiding it from IT. The question facing every organization is no longer whether to adopt artificial intelligence — it’s how to secure the sensitive data flowing into it every single day. This is the governance gap.

Secret Scanning For AI Coding Tools With ggshield

Introducing ggshield AI hooks from GitGuardian to help stop AI coding assistants from leaking secrets. See how ggshield can scan prompts, tool calls, file reads, MCP calls, and tool output inside AI coding tools like Cursor, Claude Code, and VS Code with GitHub Copilot. When a secret is detected, ggshield can block the action before sensitive data is sent or exposed. You will also see how simple the setup is, with flexible install options for local or global use. This adds practical guardrails to AI-assisted development and helps teams move fast without increasing secret sprawl.
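The blocking behavior described above can be illustrated with a minimal hook in Python. To be clear, this is not ggshield's actual implementation or API; it is a sketch of the underlying pattern — scan outbound text against known secret formats and refuse the action on a match. The AWS access key ID prefix (`AKIA` followed by 16 uppercase alphanumerics) and the GitHub `ghp_` token prefix are widely documented formats used here as examples:

```python
import re

# Illustrative secret patterns; a production scanner like ggshield
# ships hundreds of detectors plus entropy-based checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token
]

def pre_send_hook(payload: str) -> bool:
    """Return True to allow the outbound prompt/tool call, False to block it."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(payload):
            return False  # secret found: block before data leaves the machine
    return True

# A leaked AWS key in a prompt gets blocked; clean text passes through.
blocked = pre_send_hook("aws_key = AKIA0123456789ABCDEF")   # False
allowed = pre_send_hook("print('hello world')")             # True
```

The design choice worth noting is the hook placement: scanning happens before the payload reaches the AI tool or network, so a detection prevents exposure rather than merely reporting it afterward.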