The Data Problem: Why LLM Security Is So Complex
Large language models are trained on terabytes of data, but what happens when that data is flawed? In this video, A10 Networks' security experts, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, discuss a critical, often-overlooked aspect of AI security: the training data itself. They explain that LLMs are inseparable from the data they're trained on, which means that if the data contains biases, toxic content, or other vulnerabilities, attackers can exploit those flaws.