Invisible Instructions: Multimodal AI is Already Being Tricked

In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar and Madhav Aggarwal respond to Jamison Utter's example of a now well-known form of multimodal attack: abusing AI resume screeners by exploiting the gap between text and visual processing.

The Resume Attack: White-on-White Text

🔘 The Attack Vector: Jamison describes the "crafty" tactic of embedding white-on-white text in a resume. A human reviewer never sees the hidden text, but the AI screening tool reads it, including instructions telling the model to rank the candidate at the top.

🔘 Beyond Resumes: Madhav points out that this form of implicit information watermarking is already happening beyond resumes, citing people who embed similar LLM instructions in their LinkedIn bios to ensure they are recommended for job postings.

🔘 The Multimodal Challenge: This attack highlights the core challenge of multimodal systems: they interpret vision and text differently, letting attackers hide instructions that are invisible to human review but perfectly legible to the machine (see the detection sketch below).
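To illustrate the human-invisible / machine-legible gap, here is a minimal detection sketch. It assumes resumes arrive as .docx files and uses the python-docx library to flag text runs a human reviewer would likely never see (white font color or unreadably small type) before the content is handed to an AI screening model. The file name and thresholds are illustrative, not from the episode.

```python
# Minimal sketch: flag hidden "white-on-white" or micro-sized text in a .docx resume
# before its contents reach an AI screening model. Assumes the python-docx library.
from docx import Document
from docx.shared import RGBColor, Pt

WHITE = RGBColor(0xFF, 0xFF, 0xFF)
MIN_READABLE_PT = 6  # assumption: anything smaller is effectively invisible to a human

def find_hidden_runs(path: str) -> list[str]:
    """Return text runs that a human reader would likely never see."""
    suspicious = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            if not run.text.strip():
                continue
            color = run.font.color.rgb  # None if no explicit color is set
            size = run.font.size        # None if the size is inherited from the style
            if color == WHITE or (size is not None and size < Pt(MIN_READABLE_PT)):
                suspicious.append(run.text)
    return suspicious

if __name__ == "__main__":
    hits = find_hidden_runs("resume.docx")  # hypothetical input file
    if hits:
        print("Possible hidden instructions found:")
        for text in hits:
            print(" -", text)
```

This only catches the specific trick discussed in the clip (hidden text in the document itself); instructions hidden in images or other modalities would need separate checks.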

Watch the full episode for a deep dive into securing AI agents against multimodal attacks, language switching, and model drift.

Jamison Utter | A10 Networks
Madhav Aggarwal | A10 Networks
Diptanshu Purwar | A10 Networks

Learn how to secure AI and LLMs in your organization: https://bit.ly/4kOHmYd

#multimodalai #aisecurity #promptinjection #resumehacks #linkedinbio #hiddentext #a10networks #cybersecurity