
6 Lessons Security Leaders Must Learn About AI and APIs

Most organizations that treat AI security as a model problem are defending the wrong layer. Security teams filter prompts, patch jailbreaks, and tune model behavior. All of that is necessary work, but the actual attack surface sits largely unexamined underneath it: the API layer, the endpoints AI systems use to retrieve data, call tools, and take action on behalf of users. This isn't a theoretical gap.
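
To make that concrete, here is a minimal sketch of enforcement at the API layer rather than in the model: every agent-initiated request is checked against an endpoint allow-list and the end user's own permissions. All names here (ToolCall, ALLOWED_ENDPOINTS, authorize) are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch: authorize AI tool calls at the API layer, not the model layer.
# All names are illustrative assumptions for demonstration.
from dataclasses import dataclass

ALLOWED_ENDPOINTS = {"/orders/lookup", "/kb/search"}  # explicit per-agent allow-list

@dataclass
class ToolCall:
    endpoint: str
    params: dict
    acting_user: str  # the human the agent is acting on behalf of

def authorize(call: ToolCall, user_permissions: set) -> bool:
    """Gate every agent-initiated request before it reaches a backend."""
    if call.endpoint not in ALLOWED_ENDPOINTS:
        return False  # the agent may only touch endpoints it was designed for
    # Check the end user's entitlements, not the agent's service account,
    # so a prompt-injected agent cannot act beyond its caller's rights.
    return call.endpoint in user_permissions

print(authorize(ToolCall("/orders/lookup", {"id": "123"}, "alice"),
                {"/orders/lookup"}))   # True: allow-listed and permitted
print(authorize(ToolCall("/admin/export", {}, "alice"),
                {"/orders/lookup"}))   # False: endpoint not allow-listed
```

The key design choice is that authorization decisions flow from the end user's identity, so a compromised or manipulated agent can never do more than its caller could.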

Chipotle Bot Hacked! AI Fails: Live Laugh Logs ep1

What happens when 20,000 engineers descend on Amsterdam to talk about Kubernetes and AI? Welcome to Episode 1 of Live Laugh Logs, the podcast from Annie, Lewis, and Andre of the Coralogix Developer Relations team, where we get together and recap everything going on in our worlds! We had an amazing time at KubeCon in Amsterdam and came away with loads of insights from the talks we attended on designing observability systems, the new wave of AI tools and how to observe them, and working with agent-generated code.

Exposure Prioritization Agent: Demo Drill Down

Vulnerability volume continues to rise, making it difficult for security teams to determine which exposures actually matter. Without clear prioritization, teams are forced to react to sheer volume, often chasing severity scores instead of real risk. In this demo drill down, we showcase the Exposure Prioritization Agent within Falcon Exposure Management. You'll see how AI-driven prioritization uses ExPRT.AI, adversary intelligence, and business context to reduce millions of vulnerabilities to a focused set of high-risk exposures.
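
As a rough illustration of the idea (this is not the actual ExPRT.AI model; the weights and field names below are assumptions), a risk-based score blends severity with exploitation evidence and business context, so a lower-severity but actively exploited finding on a critical asset outranks a high-CVSS one on a throwaway host:

```python
# Illustrative risk-based prioritization sketch; weights and fields are assumed.
def risk_score(vuln: dict) -> float:
    """Blend normalized severity with adversary intel and business context."""
    base = vuln["cvss"] / 10.0                              # severity, 0..1
    exploited = 1.0 if vuln["actively_exploited"] else 0.2  # adversary intelligence
    criticality = vuln["asset_criticality"]                 # business context, 0..1
    return base * 0.3 + exploited * 0.4 + criticality * 0.3

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "actively_exploited": False, "asset_criticality": 0.1},
    {"id": "CVE-B", "cvss": 7.5, "actively_exploited": True,  "asset_criticality": 0.9},
]
# The lower-severity but exploited, business-critical finding ranks first.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["id"], round(risk_score(v), 2))   # CVE-B 0.9, CVE-A 0.4
```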

Measuring Real Risk Reduction Across Your Security Stack

Garrett Hamilton recently presented at the North Texas ISSA Lunch & Learn in Plano, TX, on what risk reduction actually looks like in practice. Reach shows customers exactly which controls they've deployed, the user impact of those changes, and how much risk has been reduced across IAM, EDR, email, firewall, and SASE. Not feature checklists: targeted, measurable outcomes tied to the business.

Qinglong task scheduler RCE vulnerabilities exploited in the wild for cryptomining

In early February 2026, users of Qinglong (青龙), a popular open source timed-task management platform with over 19,000 GitHub stars, began reporting that their servers were maxing out CPU usage. The cause was a cryptominer binary called .fullgc, deployed through two authentication-bypass vulnerabilities that allowed unauthenticated remote code execution. The attacks went largely unnoticed in the English-speaking security community.
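
If you run Qinglong and want a quick local check, a hedged sketch like the one below scans running processes for the reported miner name. The .fullgc indicator is taken from the reports above; the psutil-based approach is just one simple way to do it, and you should adapt the indicator list to your own threat intel.

```python
# Hedged detection sketch: flag processes matching the reported miner name.
import psutil

SUSPECT_NAMES = {".fullgc"}  # indicator from public reports; extend as needed

def find_suspect_processes():
    hits = []
    for proc in psutil.process_iter(attrs=["pid", "name", "cmdline"]):
        name = proc.info.get("name") or ""
        if name in SUSPECT_NAMES:
            hits.append((proc.info["pid"], name, proc.info.get("cmdline")))
    return hits

for pid, name, cmdline in find_suspect_processes():
    print(f"Suspicious process {pid}: {name} {cmdline}")
```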

The Configuration Drift Behind the Teams Helpdesk Breach

On April 22, 2026, Google's Threat Intelligence Group and Mandiant disclosed a campaign by a threat actor they're tracking as UNC6692. The group breached enterprise networks by impersonating IT helpdesk staff over Microsoft Teams, ultimately exfiltrating Active Directory databases and achieving full domain compromise. What's notable about UNC6692 is what they didn't do. They didn't use a zero-day. They didn't exploit a software vulnerability.

Agentic SecOps: Build a security AI agent that automatically investigates detections

A credential access event fired. An AI agent investigated it, correlated it against running processes, assessed the risk, and closed the ticket. No analyst touched it. The entire loop ran in minutes. This is what security operations look like when AI can actually operate in the environment rather than advise from outside it. Security operations have always required a special kind of person.
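
A stripped-down sketch of that investigate-then-decide loop is below. Every helper is a stub standing in for real SIEM/EDR integrations; the function names and fields are assumptions for illustration, not any product's API.

```python
# Hypothetical sketch of an automated triage loop; all helpers are stubs
# standing in for real SIEM/EDR integrations.

def fetch_detection(detection_id: str) -> dict:
    return {"id": detection_id, "host": "web-01", "type": "credential_access"}

def correlate_processes(host: str) -> list:
    return [{"name": "sshd", "known_good": True}]  # stubbed EDR process snapshot

def assess_risk(detection: dict, related: list) -> float:
    # Toy heuristic: low risk if every correlated process is known-good.
    return 0.1 if all(p["known_good"] for p in related) else 0.8

def investigate(detection_id: str) -> str:
    detection = fetch_detection(detection_id)            # pull the raw alert
    related = correlate_processes(detection["host"])     # running-process context
    if assess_risk(detection, related) < 0.3:
        return "closed: matched known-good process activity"
    return "escalated to analyst with collected evidence"

print(investigate("det-42"))  # -> closed: matched known-good process activity
```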

How Do AI Agents Create Data Exfiltration Risk?

AI agents create data exfiltration risk by combining three capabilities that are dangerous together: access to private data, exposure to untrusted content, and the ability to communicate externally. When all three exist in one agent, an attacker can hide instructions inside an email, document, or webpage the agent processes and trick it into sending sensitive data out. No software vulnerability is required. The attacker doesn't need to break in. They just need to talk to your agent.
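
One common mitigation is to break the trifecta in code: if an agent session has both touched private data and ingested untrusted content, deny external communication for the rest of that session. A minimal sketch, with illustrative class and field names:

```python
# Minimal "trifecta" policy gate: block external sends once a session has
# combined private data access with untrusted input. Names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentSession:
    touched_private_data: bool = False
    saw_untrusted_content: bool = False

def may_communicate_externally(session: AgentSession) -> bool:
    # All three capabilities together enable exfiltration; sever the chain here.
    return not (session.touched_private_data and session.saw_untrusted_content)

s = AgentSession()
s.touched_private_data = True          # agent read an internal document
s.saw_untrusted_content = True         # agent also processed an inbound email
print(may_communicate_externally(s))   # False: outbound tool calls are blocked
```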

Privacy in Enterprise AI: Why It's the Foundation, Not a Feature

Last week, OpenAI released Privacy Filter, an open-weight model for detecting and redacting PII in text. It is a thoughtful release: Apache 2.0 licensed, able to run locally, designed for high-throughput workflows, and built to go beyond regex-based detection. This is good news for everyone building enterprise AI. Privacy at the model layer is getting real attention. What we liked most was how clearly OpenAI described the role of the model.
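
For context on what "beyond regex-based detection" means, here is the kind of simplified regex baseline such a model improves on. The patterns are deliberately naive and will miss plenty of real-world PII, which is exactly the gap a learned detector targets.

```python
# Illustrative regex baseline for PII redaction; patterns are simplified
# assumptions and will miss edge cases a model-based detector can catch.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com, SSN 123-45-6789."))
# -> Reach Jane at [EMAIL], SSN [SSN].
```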