The Invisible Trick: How to Fool an AI Agent A10 Networks' security experts, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, discuss a classic example of an adversarial attack that tricks an AI agent using the equivalent of invisible watermarks. Madhav explains how researchers used an invisible watermark in a research paper that, when scanned by an AI agent, would automatically trigger a positive review. This watermark was not visible to human reviewers. This clever manipulation highlights a significant vulnerability in AI models: they can be influenced by hidden data in their input.