I Tried 5 Prompt Injection Attacks (Here's What Happened)
In this video, we explore the growing security risk of prompt injection in large language model (LLM) applications. As AI becomes embedded in more products, new vulnerabilities emerge, especially through natural language manipulation.
We break down how LLMs work, the importance of system prompts, and demonstrate five real-world prompt injection techniques used to extract sensitive information or bypass safeguards. You’ll see live examples using different models and learn why newer models are more resilient, but still not immune.
If you're building or using AI-powered applications, this is essential knowledge to help you understand and mitigate risks.
Use Snyk for free to find and fix security issues in your applications today! https://snyk.co/ugLYn
⏲️ Chapters ⏲️
00:00 Introduction: Prompt Injection Example (Batman Prompt)
00:27 Why Prompt Injection Matters in Modern Apps
00:54 How LLMs Work (Statelessness, Memory, System Prompts)
01:41 Importance of System Messages & Instruction Hierarchy
02:30 Attempt 1: Direct Instruction Override (Older vs Newer Models)
05:15 Attempt 2: Structured Output / JSON Schema Attack
06:28 Attempt 3: Role Playing Exploit
07:24 Combining Role Play + Structured Attacks
08:54 Attempt 4: Multi-turn Manipulation Attack
11:06 Attempt 5: Payload Splitting (Single Prompt Attack)
12:23 Recap of All 5 Prompt Injection Techniques
12:42 Final Thoughts & Future Techniques
⚒️ About Snyk ⚒️
Snyk helps you find and fix vulnerabilities in your code, open-source dependencies, containers, infrastructure-as-code, software pipelines, IDEs, and more! Move fast, stay secure.
Learn more about Snyk: https://snyk.co/ugLYl
📱 Connect with Us 📱
🖥️ Website: https://snyk.co/ugLYl
🐦 X: http://twitter.com/snyksec
💼 LinkedIn: https://www.linkedin.com/company/snyk
💬 Discord: https://discord.gg/devsecops-community-918181751526948884
- ️ Subscribe: https://www.youtube.com/c/SnykSec
- 🔥 We're hiring! Check our open roles: https://snyk.co/ugLYp
🔗 Hashtags 🔗
#promptinjection #llm #llmsecurity #aisecurity