Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

I Didn't Revoke my API Keys Because Claude Called Me An Idiot

I need to confess something. A few days ago whilst vibe coding at 2am (which can end up burning through tokens like they are going out of fashion) I accidentally pasted my API key directly into a Claude chat instead of the terminal window I had open. Claude told me off. It felt like a full, proper, disappointed parent tone; the AI equivalent of 'I'm not angry, just disappointed', except it absolutely was angry. There may have been paragraphs.

Best Practices for Implementing AI Agents

On March 9th, Codewall.ai disclosed how it had hacked McKinsey & Company’s AI platform called Lilli, a purpose-built system for 43,000+ employees to analyze documents, chat, and access decades of proprietary research. The researchers unleashed an AI agent which quickly scanned 200 endpoints, identified 22 that did not require authentication, and one that wrote user search queries into a database including non-parameterized JSON keys which were concatenated directly into SQL.

The Future of Superintelligent Security Operations Starts with Data Built for AI

Every major shift in security operations starts with a shift in the underlying platform. The AI era is no different. As artificial intelligence moves from novelty to necessity, the real dividing line in cybersecurity will not be which vendor can add AI features the fastest. It will be which platforms are built on the right foundation to make AI useful in real operations and trustworthy when the stakes are high. That foundation is data, but not in the simplistic sense the market often uses the term.

The AI Malware Surge: Behavior, Attribution, and Defensive Readiness

Over the last year, AI-assisted malware development has evolved from an experimental practice into a common part of the attacker toolkit. In a rolling window from February 2025 to February 2026, Arctic Wolf Labs observed over 22,000 distinct files triggering AI-focused YARA rules across multiple malware repositories. These files included AI-generated code, large language model (LLM)-style scaffolding, runtime AI API integration, and DeepSeek-derived artifacts.

Agentic Context Security Platform Protecto is Now Available on Google Cloud Marketplace

Enterprise Agentic AI adoption faces a critical barrier: sensitive data exposure. AI agents perform tasks only as well as the context provided to them. However, context is precisely where enterprise data enters the workflow, introducing significant risk. Organizations need to deploy AI applications while maintaining strict data security, regulatory compliance, and privacy. This challenge stalls production deployments across enterprises, especially in healthcare and financial services.

Ep 35: RSAC FOMO? Dojo AI Demo

As we gear up for RSA Conference, we give viewers a sneak peek at Sumo Logic's SOC analyst agent, which turns a 45-minute analyst investigation into a five-minute AI-powered sprint. We walk through live demos showing how the agent automatically generates queries, maps threats to MITRE ATT&CK, and hands you recommended remediation actions all without making you switch tabs or tools. We also show off MCP integration that lets teams collaborate on active investigations right from Slack, because no one should be chained to their war room when there's dinner to be had.

WebPromptTrap - New Indirect Prompt Injection Vulnerability in BrowserOS

Cato researchers have discovered a new indirect prompt injection exploit pattern workflow in BrowserOS (an open-source agentic AI browser). We named it “WebPromptTrap” because the prompt originates from untrusted web content and it traps users into approving an authorization step through a trusted-looking AI summary.

Spring 2026 GenAI Code Security Update: Despite Claims, AI Models Are Still Failing Security

The last six months have been nothing short of revolutionary for AI-powered coding. OpenAI‘s “Code Red” release brought us GPT-5.1 and 5.2. Google unveiled Gemini 3 with its touted “unprecedented reasoning capabilities.” Anthropic rolled out Claude 4.5 and 4.6, powering the increasingly ubiquitous Claude Code features. Enterprise adoption of tools like OpenClaw has exploded, with developers praising unprecedented productivity gains.

The AI Control Gap: Why Partners Are Now on the Front Line

For channel partners, AI has quickly moved from a future conversation to a current customer problem. Clients are already using AI across their organisations, often faster than governance can keep up. What’s emerging is not just another technology trend, but a new class of risk that customers cannot fully see or control. Our latest research, based on insights from senior security leaders in highly regulated industries, highlights the scale of the issue.

The Library That Holds All Your AI Keys Was Just Backdoored: The LiteLLM Supply Chain Compromise

We just published a deep breakdown of the Trivy supply chain attacks yesterday. Twenty-four hours later, we’re writing about the next one. Same threat actor. Different target. Worse implications. This time it’s LiteLLM, the Python library that acts as a universal API gateway for over 100 LLM providers. If you’re building anything with AI agents, MCP servers, or LLM orchestration, there’s a good chance LiteLLM is somewhere in your dependency tree.