Why You Can't Defend Against Prompt Injection
Prompt injection works because language models struggle to tell the difference between trusted instructions and untrusted user content. Unlike SQL injection or cross site scripting, there is no clean deterministic defence, which leaves code, libraries and AI workflows open to manipulation at multiple points. #razorwirepodcast #cybersecurity #promptinjection #llmsecurity #aisecurity
⸻
For more information about us or if you have any questions you would like us to discuss email podcast@razorthorn.com.
If you need consultation, visit (https://www.razorthorn.com). We give our clients a personalised, integrated approach to information security, driven by our belief in quality and discretion.
⸻
Follow us online:
LinkedIn: (https://www.linkedin.com/company/razorthorn-security)
YouTube: (https://www.youtube.com/c/RazorthornSecurity)
TikTok: (https://www.tiktok.com/@razorwire.podcast)
Instagram: (https://www.instagram.com/razorwire.podcast)
X: (https://x.com/RazorThornLTD)
Website: (https://www.razorthorn.com)