Webinar Replay - AI Security Testing: Prompt Injection Everywhere
Kroll’s LLM penetration testing practice has analyzed OpenAI-based models, non-public models and RAG systems, and has drawn on this work to produce an anonymized dataset cataloging vulnerabilities across all of its LLM engagements. Kroll has found a worrying prevalence of prompt injection attacks in the LLM cases it has investigated and shares its findings in this briefing.