Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Critical vLLM Flaw Exposes the Soft Underbelly of AI Infrastructure

While the world worries about "jailbreaking" LLMs or preventing them from hallucinating, a critical new vulnerability has just reminded us of a fundamental truth: AI is just software, and software has bugs. A newly discovered critical flaw (CVE-2025-62164) in vLLM, one of the most popular libraries for serving large language models, allows attackers to achieve Remote Code Execution (RCE) or crash servers simply by sending a malicious API request. This isn't a failure of the AI model.

Hackers hijack Google Smart Home #aisecurity #mcpserver

Building AI agents that can think, act, and adapt securely isn't easy. From prompt design to deployment, every stage brings new challenges and new risks. In this session, Bar-El Tayouri, Head of Mend AI at Mend.io, and Yehoshua (Shuki) Cohen, VP of Data and AI Evangelist at AI21 Labs, shared practical strategies for designing and defending agentic systems that actually deliver. Key topics covered: Originally recorded: October 29, 2024.

Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems

The rapid adoption of AI has introduced a new, semantic attack vector that many organizations are ill-prepared to defend against: prompt injection. While many security teams understand the threat of direct prompt injection attacks against AI agents developed by their organizations, another more subtle threat lurks in the shadows: indirect prompt injection attacks.

Why AI Security Requires Context: Introducing Issues & the Correlation Agent

Data is never the problem. Security teams rarely complain about having too much of it. The real danger comes from data that sits unconnected and unexplained. What teams actually need is data that is actionable and converges into meaning. Data that cuts deeper than surface level signals. Data that reveals what is unfolding and what needs to happen next.

Safe Harbor: An Open Source "Abort Mission" Button for Your AI Agent

AI agents are increasingly connecting to more systems and workflows. They read structured data, follow multi-step instructions, and can reach deep into applications and developer environments. The same capabilities that make them powerful also create new opportunities for attackers. As Zenity Labs continued to study these emerging attack classes, we noticed a pattern starting to appear.

Your Browser is Becoming an Agent. Zenity Keeps It From Becoming a Threat.

Agentic browsers are quickly becoming part of everyday work. Tools like ATLAS, Comet, and Dia can read web content, navigate SaaS tools, interpret instructions, and act on behalf of a user. They promise faster execution and higher productivity but they also introduce new risks that traditional security tools are not designed to see. As these browser-based agents spread across both managed and unmanaged devices, the enterprise attack surface grows in ways that most teams can’t quantify.

SOAR in the AI era: How SAP uses intelligent workflows to build an AI SOC

SOAR was created to help security teams work faster and more consistently by automating and orchestrating core security operations. It has always had to adapt to new and evolving technologies, but our current AI era has brought about a turning point. As cloud environments scale, manual playbooks can’t keep up. Now, it’s not enough to automate. We need systems that can understand the context they’re running in and adapt accordingly.

The New AppSec Reality: AI Anxiety, Silent Flaws, and Supply Chains

We recently published a series of polls across our social channels to get a pulse on some of today’s application security concerns with AI. These recent conversations with our community reveal a clear and urgent shift in the application security landscape. Results show that while established challenges like software supply chain security remain top of mind, the rapid pace of AI has created a new center of gravity for anxiety.