Prompt Injection Attacks in LLMs: Complete Guide for 2026
In February 2023, Stanford University student Kevin Liu carried out what became one of the most widely discussed security demonstrations in AI history. Using a simple prompt-injection attack, he tricked Microsoft's Bing Chat into revealing its internal codename, Sydney, and disclosing its hidden system prompt. The attack required no specialized toolkit, no zero-day exploit, and no elevated privileges, only carefully crafted natural language.