
AI Workload Security for Healthcare: What CISOs Need to Prove Under HIPAA

A patient calls your privacy office and requests an accounting of every disclosure of her PHI made outside treatment, payment, and healthcare operations over the past six years. This is her right under HIPAA. Your privacy officer pulls the EHR disclosure log. It is complete through the day your organization deployed its first production AI agent.

How to investigate cloud credential compromise with Bits AI Security Analyst

Cloud environments create a flood of security signals, often reaching tens of thousands per day depending on the organization’s size. Security engineers and analysts spend a disproportionate share of their time triaging these signals instead of acting on legitimate threats. But the time-intensive parts of that work, such as identifying related signals and building a timeline, can be handled systematically, leaving teams free to focus on what actually requires human judgment.

Evaluate, optimize, and secure your Google Cloud AI stack with Datadog

As AI adoption accelerates on Google Cloud, the challenge for most teams today is no longer just building AI-powered applications. It's also managing the full AI stack from end to end, including data pipelines, infrastructure, release processes, and security operations. Many teams monitor these layers with different tools, creating complexity, fragmenting visibility, and slowing decisions on what to do next.

Securing air-gapped environments with Elastic on Google Distributed Cloud

If you are not using AI to defend against AI, you will lose. But for organizations operating in air-gapped environments, the path to AI-driven defense can be blocked by the very isolation that protects them. Today, we're announcing that Elastic Security is now the embedded security layer for Google Distributed Cloud (GDC) air-gapped environments, expanding our collaboration with Google Cloud.

Anthropic's Mythos and the New Reality of AI Cybersecurity Risk

I was on ABC News recently discussing why banks are on alert as new AI systems like Anthropic’s Claude Mythos raise cybersecurity concerns. What struck me most is how quickly the conversation has shifted. This is no longer a hypothetical risk or something we are planning for in the future. Financial institutions and regulators are reacting in real time to what AI is already capable of doing. From my perspective, we are still underestimating how fast this is moving.

You're Not Watching MCPs. Anthropic's Vulnerability Shows Why You Should Be.

Last week, researchers at OX Security published findings that should stop every security leader in their tracks. They discovered a critical vulnerability baked directly into Anthropic's Model Context Protocol SDK, affecting every supported language: Python, TypeScript, Java, and Rust. The result: remote code execution on any system running a vulnerable MCP implementation, with direct access to sensitive user data, internal databases, API keys, and chat histories. More than 7,000 publicly accessible servers were exposed.

AI SecOps Workshop Series: Accelerating Cloud Security Operations with Claude Code and LimaCharlie

In this workshop, we will show how to use Claude Code with LimaCharlie to accelerate cloud security operations: Claude Code will deploy agents, create detections, and identify issues before they become incidents. This hands-on session demonstrates the power of integrating Anthropic's Claude Code with the LimaCharlie security platform, focusing on how Claude Code can accelerate and streamline cloud security operations and turn reactive tasks into proactive, automated workflows.
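For readers unfamiliar with the platform, the detections mentioned above are expressed as LimaCharlie detection-and-response (D&R) rules. A minimal sketch of one such rule is shown below; the process name and report name are illustrative choices, not artifacts from the workshop itself:

```yaml
# Illustrative LimaCharlie D&R rule: flag PowerShell launches.
# The "detect" block matches telemetry events; "respond" lists actions.
detect:
  event: NEW_PROCESS          # fire on process-creation telemetry
  op: ends with               # string match on the tail of the path
  path: event/FILE_PATH       # field within the event to inspect
  value: powershell.exe       # hypothetical example target
  case sensitive: false
respond:
  - action: report            # emit a detection report
    name: suspicious-powershell-launch
```

In the workflow described, Claude Code would draft and refine rules of this shape rather than an analyst writing them by hand.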

Logging Is Not Observability: The AI Security Gap MSSPs Can't Ignore

Every MSSP is fielding the same question from clients right now: "Are we safe with AI?" Most are answering with some version of "yes, we're logging everything." In a recent Defender Fridays episode, Saurabh Shintre, Founder and CEO of Realm Labs, drew a hard line between these two concepts: "You can log prompt and response and this bare minimum you have to do."

Beyond the Prompt: Data Security in Generative AI Platforms

Generative AI tools have changed how people work and play online. Everyone is excited about the speed and creativity these systems offer. Users often type sensitive info into prompts without thinking about where it goes. Security experts worry about how these platforms handle personal data. It is easy to forget that anything typed into a public bot might be stored. Staying safe means knowing how to use these tools without giving away secrets.