Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Your Convenient AI Agent Is a Backdoor to Your Files #agenticai #promptinjection

People are installing powerful AI agents on everyday laptops without realising those tools can access files, emails and operating system functions. Once prompt injected, that agent can behave like a malicious version of its user, which turns convenience into a direct path for deletion, exfiltration and loss of control.

The AI Supply Chain is Actually an API Supply Chain: Lessons from the LiteLLM Breach

The recent supply chain attack involving Mercor and the LiteLLM vulnerability serves as a massive wake-up call for enterprise security teams. While the security industry has spent the last year fixating on prompt injections and model jailbreaks, this breach highlights a far more systemic vulnerability. The weakest link in enterprise AI is not necessarily the model itself. It is the middleware connecting the models to your data.

AI Governance and Risk: Expert Insights for Enterprise Leaders

‍ As GenAI tools become embedded in core business operations, the governance programs meant to oversee them are still catching up. Closing that gap requires visibility into where AI operates and the ability to express exposure in financial terms that leadership can act on. The organizations best positioned to manage AI risk are those that have already started treating it as a measurable business variable rather than an abstract operational concern. ‍

Episode 12 - The Agentic SOC: Upleveling Analysts with AI Knowledge Multipliers

Richard Bejtlich sits down with Stan Kiefer, Corelight’s Senior Manager for Data Science, to discuss how AI serves as a vital "abstraction layer" and "knowledge multiplier" for security analysts. Stan explains that while AI can synthesize complex information, it remains untrustworthy without high-fidelity network data at its center to provide verifiable evidence. The episode explores the shift toward an "agentic ecosystem" and a tiered architecture where a central orchestrator manages specialized sub-agents to accelerate detection and investigation.

Evil Token: AI-Enabled Device Code Phishing Campaign

On April 6, 2026, Microsoft Defender Security Research published an advisory detailing a large-scale phishing campaign that leverages the OAuth Device Code Authentication flow to compromise Microsoft 365 accounts across organizations globally. This campaign represents a significant evolution from manual social engineering to fully automated, AI-driven attack infrastructure.

AI in the SOC with Joshua Neil

Join us for this week's Defender Fridays as we explore AI in the SOC with Josh Neil, Co-founder of Alpha Level. At Defender Fridays, we delve into the dynamic world of information security, exploring its defensive side with seasoned professionals from across the industry. Our aim is simple yet ambitious: to foster a collaborative space where ideas flow freely, experiences are shared, and knowledge expands.

Why You Can't Defend Against Prompt Injection

Prompt injection works because language models struggle to tell the difference between trusted instructions and untrusted user content. Unlike SQL injection or cross site scripting, there is no clean deterministic defence, which leaves code, libraries and AI workflows open to manipulation at multiple points.