Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Cyberattacks tied to conflict in Iran, open source exploit & AI espionage / Intel Chat [#306]

In this episode of The Cybersecurity Defenders Podcast, we discuss some intel being shared in the LimaCharlie community. Support our show by sharing your favorite episodes with a friend, subscribing, giving us a rating, or leaving a comment on your podcast platform. This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API first. Scale with confidence as your business grows.

Where AI in the SOC is actually delivering - and where it isn't

“We’ll have a generation of security professionals who can supervise AI but can’t function without it.” For all the noise surrounding “agentic AI” in cybersecurity, security operations centers are still wrestling with the same fundamental questions: What does AI genuinely improve today? Where does it fall short? How can organizations tell the difference?

Cloudflare Client-Side Security: smarter detection, now open to everyone

Client-side skimming attacks have a boring superpower: they can steal data without breaking anything. The page still loads. Checkout still completes. All it takes is one malicious script tag. If that sounds abstract, two recent skimming attacks illustrate the point. To further our goal of building a better Internet, Cloudflare established a core tenet during our Birthday Week 2025: powerful security features should be accessible without requiring a sales engagement.
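Because a skimmer only needs one extra script tag, one common defensive pattern is auditing a page's external scripts against a known-good baseline. The sketch below illustrates the idea with Python's standard-library HTML parser; the allow-listed hosts are hypothetical, and real deployments (including Cloudflare's client-side security product) build such baselines from observed traffic rather than a hardcoded set.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allow-list of script hosts expected on the page.
ALLOWED_SCRIPT_HOSTS = {"cdn.example.com", "checkout.example.com"}

class ScriptAuditor(HTMLParser):
    """Collects external <script src> URLs whose host is not allow-listed."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src:
            host = urlparse(src).netloc
            if host and host not in ALLOWED_SCRIPT_HOSTS:
                self.violations.append(src)

def audit(html: str) -> list[str]:
    auditor = ScriptAuditor()
    auditor.feed(html)
    return auditor.violations

page = (
    '<script src="https://cdn.example.com/app.js"></script>'
    '<script src="https://evil.example.net/skim.js"></script>'
)
print(audit(page))  # flags only the unexpected script host
```

Note that the page renders and checkout completes either way; the only observable signal is the unexpected script source, which is why baseline-diffing is the detection lever here.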

AI Adoption Surging in Financial Services - But Control Lagging

Artificial intelligence is moving rapidly from experimentation into everyday use across financial services. From client servicing and research to operations and risk analysis, AI is increasingly embedded in core workflows. This shift is widely recognised within the industry. Recent research indicates that 67% of financial services organisations report rapid AI adoption, with 93% ranking AI as a top security priority heading into 2026. At the same time, governance structures are being established.

AI Agent Security Framework on AWS EKS: Implementation Guide

You’ve enabled GuardDuty EKS Runtime Monitoring across your clusters. You’ve configured IRSA for your Bedrock-calling agents. CloudTrail is logging every bedrock:InvokeModel event. And last Tuesday, one of your AI agents exfiltrated 12,000 customer records through a sequence of API calls that every one of those tools recorded as completely normal—because at the control plane level, they were.
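When every individual `bedrock:InvokeModel` event passes control-plane checks, detection has to shift from per-event signatures to aggregate behavior per identity. A minimal sketch of that idea, assuming CloudTrail events have already been parsed into (timestamp, principal, event name) tuples; the window size, threshold, and agent names are illustrative, not AWS defaults:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical thresholds: each call is individually legitimate, so the
# detection keys on call volume per identity inside a sliding window.
MAX_CALLS_PER_WINDOW = 100
WINDOW = timedelta(minutes=5)

def burst_principals(events):
    """events: iterable of (timestamp, principal, event_name) tuples, e.g.
    parsed from CloudTrail records. Returns principals whose InvokeModel
    volume inside any sliding window exceeds the threshold."""
    by_principal = defaultdict(list)
    for ts, principal, name in events:
        if name == "InvokeModel":  # the Bedrock call CloudTrail records
            by_principal[principal].append(ts)

    flagged = set()
    for principal, stamps in by_principal.items():
        stamps.sort()
        start = 0
        for end, ts in enumerate(stamps):
            while ts - stamps[start] > WINDOW:
                start += 1  # slide the window forward
            if end - start + 1 > MAX_CALLS_PER_WINDOW:
                flagged.add(principal)
                break
    return flagged

base = datetime(2026, 1, 1)
# 150 calls in 150 seconds: each event looks normal, the sequence does not.
events = [(base + timedelta(seconds=i), "invoice-agent", "InvokeModel")
          for i in range(150)]
events += [(base + timedelta(minutes=10 * i), "report-agent", "InvokeModel")
           for i in range(5)]
print(burst_principals(events))
```

The design point is that GuardDuty, IRSA, and CloudTrail each answer "was this call authorized?", while exfiltration through an agent is a question about the shape of a call sequence.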

AI Workload Security on Azure: Evaluating Defender for Cloud Against Specialized Runtime Tools

Your SOC gets a Defender for Cloud alert: “Suspicious API call from AI workload pod.” You click through and find a LIST secrets call against the Kubernetes API server from a pod running your invoice-processing agent on AKS. The pod’s Workload Identity has Contributor access to your key vault. By the time your analyst opens the AKS Security Dashboard, the pod has been rescheduled.
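The root cause in a scenario like this is usually standing over-privilege: a Workload Identity holding Contributor when the agent only needs scoped secret access. A hedged sketch of a policy check that flags such assignments; the role names are real Azure built-in roles, but the identity-to-role mapping and the check itself are illustrative, not a Defender for Cloud feature:

```python
# Broad management-plane roles an AI agent workload should not hold.
OVERBROAD_ROLES = {"Owner", "Contributor"}

def overprivileged(identity_roles: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return each identity together with the overly broad roles it holds."""
    return {
        identity: roles & OVERBROAD_ROLES
        for identity, roles in identity_roles.items()
        if roles & OVERBROAD_ROLES
    }

assignments = {
    "invoice-agent": {"Contributor"},            # can touch the whole vault
    "report-agent": {"Key Vault Secrets User"},  # scoped, least privilege
}
print(overprivileged(assignments))
```

Running a check like this before the alert fires matters precisely because, as the teaser notes, the pod may be rescheduled before an analyst ever opens the dashboard.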

Claude Code Auto Mode: What It Means for AI Agent Privilege Management

Anthropic’s new Claude Code Auto Mode is generating well-deserved attention. It introduces a classifier that sits between the developer and every tool call, reviewing each action for potentially destructive behavior before it executes. It’s a real improvement over the only previous alternative to manual approval: the --dangerously-skip-permissions flag. But the announcement is also useful for a broader reason.
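The architecture is what matters here: a review step interposed between the agent and tool execution, failing closed on suspicious actions. A minimal sketch of that gating pattern follows; Anthropic's actual classifier is a model, not a regex list, so the patterns below are purely illustrative stand-ins.

```python
import re

# Illustrative destructive-action patterns; a real gate would use a
# trained classifier rather than string matching.
DESTRUCTIVE = [
    re.compile(r"\brm\s+-rf\b"),
    re.compile(r"\bdrop\s+table\b", re.IGNORECASE),
    re.compile(r"\bgit\s+push\s+--force\b"),
]

def gate(tool_call: str, execute) -> str:
    """Review a proposed tool call before running it; block on a match."""
    if any(p.search(tool_call) for p in DESTRUCTIVE):
        return f"BLOCKED: {tool_call!r} requires manual approval"
    return execute(tool_call)

print(gate("rm -rf /var/data", lambda cmd: "ran"))  # blocked
print(gate("ls -la", lambda cmd: "ran"))            # executes normally
```

The key design choice is that the gate sees every call, so the developer neither approves each action by hand nor disables review wholesale.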

Securing OpenClaw Access So It Can't Go Rogue

In this video, we demonstrate how to securely grant an AI agent (OpenClaw) access to Teleport-protected Kubernetes resources using Teleport Machine Identity and tbot, without exposing secrets, API keys, or long-lived tokens. You’ll see how Teleport treats AI agents as first-class identities, enforcing strict RBAC controls so the agent can only do what it’s allowed to do, like reading logs, while being blocked from sensitive actions like deleting resources or accessing secrets.
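The enforcement model described above is deny-by-default RBAC: the agent identity carries an explicit allow-list, and anything outside it, including deleting resources or reading secrets, is refused. A toy sketch of that semantics in Python; the role contents and verb/resource names are illustrative, not Teleport's actual role schema.

```python
# Hypothetical role for an AI agent identity: deny by default, with a
# small explicit allow-list of (verb, resource) pairs.
AGENT_ROLE = {
    "allow": {("get", "logs"), ("list", "pods")},
}

def authorize(role: dict, verb: str, resource: str) -> bool:
    """Permit only explicitly allowed (verb, resource) pairs."""
    return (verb, resource) in role["allow"]

print(authorize(AGENT_ROLE, "get", "logs"))      # reading logs: allowed
print(authorize(AGENT_ROLE, "delete", "pods"))   # destructive: denied
print(authorize(AGENT_ROLE, "get", "secrets"))   # secrets: denied
```

Treating the agent as a first-class identity means these checks happen at the access plane, so no long-lived token ever reaches the agent to be leaked or replayed.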