Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI SecOps Workshop Series: Accelerating Cloud Security Operations with Claude Code and LimaCharlie

In this workshop we will show how to use Claude Code with LimaCharlie to accelerate cloud security operations. We will have Claude Code deploy agents, create detections, and identify issues before they become incidents. This hands-on workshop demonstrates the power of integrating Anthropic's Claude Code with the versatile security platform LimaCharlie. Our focus is on leveraging Claude Code to accelerate and streamline cloud security operations, turning reactive tasks into proactive, automated workflows.

You're Not Watching MCPs. Anthropic's Vulnerability Shows Why You Should Be.

Last week, researchers at OX Security published findings that should stop every security leader in their tracks. They discovered a critical vulnerability baked directly into Anthropic's Model Context Protocol SDK, affecting every supported language: Python, TypeScript, Java, and Rust. The result: remote code execution on any system running a vulnerable MCP implementation, with direct access to sensitive user data, internal databases, API keys, and chat histories, across more than 7,000 publicly accessible servers.

Anthropic's Mythos and the New Reality of AI Cybersecurity Risk

I was on ABC News recently discussing why banks are on alert as new AI systems like Anthropic’s Claude Mythos raise cybersecurity concerns. What struck me most is how quickly the conversation has shifted. This is no longer a hypothetical risk or something we are planning for in the future. Financial institutions and regulators are reacting in real time to what AI is already capable of doing. From my perspective, we are still underestimating how fast this is moving.

Securing air-gapped environments with Elastic on Google Distributed Cloud

If you are not using AI to defend against AI, you will lose. But for organizations operating in air-gapped environments, the path to AI-driven defense can be blocked by the very isolation that protects them. Today, we're announcing that Elastic Security is now the embedded security layer for Google Distributed Cloud (GDC) air-gapped environments, expanding our collaboration with Google Cloud.

Evaluate, optimize, and secure your Google Cloud AI stack with Datadog

As AI adoption accelerates on Google Cloud, the challenge for most teams today is no longer just building AI-powered applications. It’s also managing the full AI stack from end to end, including data pipelines, infrastructure, release processes, and security operations. Many teams monitor these layers with different tools, creating complexity, fragmenting visibility, and slowing decisions on what to do next.

How to investigate cloud credential compromise with Bits AI Security Analyst

Cloud environments create a flood of security signals, often reaching tens of thousands per day depending on the organization’s size. Security engineers and analysts spend a disproportionate share of their time triaging these signals instead of acting on legitimate threats. But the time-intensive parts of that work, such as identifying related signals and building a timeline, can be handled systematically, leaving teams free to focus on what actually requires human judgment.

AI Workload Security for Healthcare: What CISOs Need to Prove Under HIPAA

A patient calls your privacy office and requests an accounting of every disclosure of her PHI made outside treatment, payment, and healthcare operations over the past six years. This is her right under HIPAA. Your privacy officer pulls the EHR disclosure log. It is complete through the day your organization deployed its first production AI agent.

AI Workload Discovery: How to Find Every AI Agent Running in Your Clusters

A CISO at a mid-sized SaaS company pulls her platform lead aside after a board meeting. One question: “Do we have AI agents running in production?” The lead pauses. He knows the data science team has been experimenting with LangChain. He remembers a conversation about a customer-support pilot. He thinks there might be an inference server in staging that got promoted last quarter.

Implementing AI Agent Security on Azure AKS: A Practical Guide

Your platform team deployed eBPF-based runtime sensors on AKS last week. Defender for Containers is enabled. Azure Policy is enforcing pod security standards across your AI workload namespaces. And your Observe pillar is still blind — because nobody enabled the Diagnostic Setting that routes kube-audit logs to the Log Analytics workspace where your tooling can actually consume them.
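Closing that gap is typically a one-time Diagnostic Setting on the AKS cluster resource. As a rough sketch (the resource group, cluster, and workspace names here are placeholders, and your log category choices may differ), enabling kube-audit export to Log Analytics with the Azure CLI looks something like this:

```shell
# Hypothetical names: my-rg, my-aks, my-law. Requires an authenticated Azure CLI session.
CLUSTER_ID=$(az aks show --resource-group my-rg --name my-aks --query id -o tsv)
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group my-rg --workspace-name my-law --query id -o tsv)

# Create a Diagnostic Setting that routes kube-audit (and the lighter-weight
# kube-audit-admin) control-plane logs to the Log Analytics workspace.
az monitor diagnostic-settings create \
  --name aks-audit-to-law \
  --resource "$CLUSTER_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"category":"kube-audit","enabled":true},{"category":"kube-audit-admin","enabled":true}]'
```

Note that full kube-audit logs can be high-volume on busy clusters; many teams start with kube-audit-admin, which drops most read-only get/list events, and widen from there.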