
The AI Supply Chain is Actually an API Supply Chain: Lessons from the LiteLLM Breach

The recent supply chain attack involving Mercor and the LiteLLM vulnerability serves as a massive wake-up call for enterprise security teams. While the security industry has spent the last year fixating on prompt injections and model jailbreaks, this breach highlights a far more systemic vulnerability. The weakest link in enterprise AI is not necessarily the model itself. It is the middleware connecting the models to your data.

The Era of Agentic Security is Here: Key Findings from the 1H 2026 State of AI and API Security Report

The era of human-centric API consumption is officially ending. Over the past year, enterprises have rapidly transitioned from simply experimenting with Generative AI to deploying autonomous AI agents that drive core business operations. These agents act as digital employees. They utilize Large Language Models (LLMs) for reasoning, Model Context Protocol (MCP) servers for connectivity, and internal APIs for execution. This evolution has fundamentally altered the enterprise attack surface.
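The stack described above can be sketched as a simple loop: the LLM emits a plan, an MCP-style tool registry provides connectivity, and internal APIs do the execution. This is a minimal illustration, not a real framework; every name here (`run_agent`, `TOOL_REGISTRY`, `call_internal_api`) is hypothetical.

```python
# Minimal sketch of the agentic loop: LLM reasoning produces a plan,
# a tool registry (the MCP-style connectivity layer) maps tool names
# to callables, and internal APIs perform the execution.
# All identifiers are illustrative assumptions, not a real framework.

def call_internal_api(endpoint: str, payload: dict) -> dict:
    # Stand-in for a real internal API call (the "execution" layer).
    return {"endpoint": endpoint, "status": "ok", "echo": payload}

# The "connectivity" layer: the tools this agent is allowed to invoke,
# analogous to what an MCP server would advertise to the model.
TOOL_REGISTRY = {
    "lookup_customer": lambda args: call_internal_api("/customers", args),
    "open_ticket": lambda args: call_internal_api("/tickets", args),
}

def run_agent(plan: list) -> list:
    """Execute an LLM-produced plan; each step names a tool and its args.

    A production deployment would add authentication, logging, and
    per-tool policy checks -- exactly the visibility layer most
    organizations are missing today.
    """
    results = []
    for step in plan:
        tool = TOOL_REGISTRY.get(step["tool"])
        if tool is None:
            # Unregistered tools are refused rather than silently executed.
            results.append({"status": "denied", "tool": step["tool"]})
            continue
        results.append(tool(step.get("args", {})))
    return results

# A plan as an LLM might emit it (hard-coded here for illustration).
print(run_agent([
    {"tool": "lookup_customer", "args": {"id": 42}},
    {"tool": "delete_database"},  # not in the registry: denied
]))
```

The point of the registry is that every action an agent can take is an API call you can enumerate, observe, and gate, which is where the enterprise attack surface now lives.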

Building Smarter Virtual Assistants with Gemini 3 Flash API: AI for Seamless Workflow Automation

As teams become more distributed and workloads continue to increase, the need for effective automation tools has never been greater. Traditional methods of collaboration often fall short when it comes to handling repetitive tasks, managing high volumes of information, or providing real-time, intelligent support. That's where AI virtual assistants come in, changing how teams collaborate, streamline workflows, and boost productivity.

Everyone is Deploying AI Agents. Almost Nobody Knows What They're Doing

AI agents are operating inside your enterprise: querying databases, triggering workflows, and taking action through APIs. Yet as adoption accelerates, most organizations cannot see, track, or control what these agents are actually doing. In this session, Roey Eliyahu, Co-Founder and CEO of Salt Security, challenges the industry's narrow focus on LLM safety and exposes the much larger, invisible attack surface created by agentic systems.

Codex API In DevSecOps: Balancing Developer Speed With Secure Code Review

AI-assisted coding is no longer a side experiment. It is becoming part of daily engineering workflows, from drafting functions and refactoring legacy code to generating tests and accelerating routine implementation work. That shift is why the Codex API now belongs in a broader DevSecOps conversation, not just a developer productivity discussion.

The Agentic Stack Explained: How LLMs, MCP Servers, and APIs Work Together

The term AI agent dominates current cybersecurity discourse. Vendors, analysts, and CISOs all use the label, yet beneath the surface-level familiarity there is often significant confusion about what an AI agent actually is, how it operates technically, and, most importantly for security teams, where the risk actually lives.

How does Sisense stay on top of API Attacks?

Sisense powers analytics experiences inside the applications businesses rely on every day. As an API-first platform, securing those connections is critical, especially as AI agents increasingly operate through APIs to access data and trigger workflows. In this conversation, Sangram, CISO and VP of IT at Sisense, and Michael Callahan, CMO at Salt Security, discuss how Sisense approached API security strategically to protect their platform, maintain customer trust, and support innovation in the Agentic AI era.

Open Banking API Security: The Complete Guide for 2026

Global open banking API call volumes are set to cross the 720 billion mark by 2029, and attackers know it. With the global open banking market surging past $38 billion in 2025 and projected to exceed $115 billion by 2030, the financial data flowing through these APIs is highly lucrative for threat actors. And with over 7.5 million calls already made to AI APIs alone, securing open banking APIs has graduated from a technical challenge to a business imperative.

I Didn't Revoke my API Keys Because Claude Called Me An Idiot

I need to confess something. A few days ago, whilst vibe coding at 2am (which can end up burning through tokens like they are going out of fashion), I accidentally pasted my API key directly into a Claude chat instead of the terminal window I had open. Claude told me off. It was a full, proper, disappointed-parent tone; the AI equivalent of 'I'm not angry, just disappointed', except it absolutely was angry. There may have been paragraphs.