Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Multimodal Attacks and Model Drift: The Future of AI Exploitation

Multimodal Attacks and Model Drift: The Future of AI Exploitation A10 security experts Jamison Utter, Diptanshu Purwar, and Madhav Aggarwal discuss the critical vulnerabilities emerging from multimodal AI agents (systems that perceive, decide, and act) and the absolute need for security mechanisms external to the Large Language Model (LLM) itself. The experts dive into why traditional security is failing and what the next evolution of defense must look like.

Invisible Instructions: Multimodal AI is Already Being Tricked

Invisible Instructions: Multimodal AI is Already Being Tricked In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar and Madhav Aggarwal respond to Jamison Utter's example of a new, well-known form of multimodal attack: abusing AI resume screeners by exploiting both text and visual processing. The Resume Attack: White-on-White Text.

Securing AI: Why Vision Models Struggle with Transparency and Depth

Securing AI: Why Vision Models Struggle with Transparency and Depth In this clip from "Securing AI, Part 4," A10 security expert Madhav Aggarwal highlights a fundamental challenge still faced by even the most popular AI vision models and chatbots: transparent objects. Madhav explains how these corner cases—situations that are obvious to a human but complex for a machine—can easily throw an AI model "completely off.".

From Model Drift to API Exploitation: The Next Challenge in AI Security

From Model Drift to API Exploitation: The Next Challenge in AI Security In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar and Madhav Aggarwal summarize why external guardrails are the only sustainable defense against the new wave of AI exploitation. Jamison Utter then sets the stage for the next topic in the series: securing the fundamental protocols and APIs that AI agents rely on.

Language Switching Attacks: The New Threat Vector in LLM Security

Language Switching Attacks: The New Threat Vector in LLM Security In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar discusses the growing trend of language-switching attacks. These techniques exploit the ongoing development and training gaps in Large Language Models (LLMs). Diptanshu explains how attackers can evade an LLM's built-in filters and guardrails by rapidly shifting between different languages, particularly less common ones, to find weaknesses where the model's safety data is sparse.