Are LLMs becoming messengers for attackers? #ai #cybersecurity
AI assistants with broad enterprise access are creating a new attack vector. Chris Luft and Matt Bromiley discuss the Gemini Jack vulnerability, where attackers used prompt injection to turn Google's AI assistant into an unwitting accomplice in data exfiltration. The attack embedded hidden instructions in documents or emails. When employees asked Gemini normal questions like "show me our budgets," the AI retrieved the poisoned document and executed the attacker's commands without anyone clicking anything.