Data poisoning is a sophisticated adversarial attack that manipulates the data used to train artificial intelligence (AI) models. By injecting deceptive or corrupt examples into the training set, attackers can degrade model performance, introduce biases, or open security vulnerabilities. As AI models increasingly power critical applications in cybersecurity, healthcare, finance, and many other industries, protecting the integrity of their training data is essential.
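To make the idea concrete, here is a minimal, purely illustrative sketch of one poisoning style: injecting mislabeled points into a training set. The "model" is a toy nearest-centroid classifier on 1-D data, and all names (`make_data`, `train_centroids`, etc.) are hypothetical, not from any real attack or library; the point is only that a modest number of attacker-chosen points can sharply reduce accuracy on clean test data.

```python
import random

random.seed(0)

# Toy 1-D dataset: class 0 clusters near x = 0.0, class 1 near x = 1.0.
def make_data(n):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        data.append((label + random.gauss(0, 0.2), label))
    return data

# "Training" a nearest-centroid model: the mean feature value per class.
def train_centroids(data):
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

# Predict by nearest centroid; accuracy over a labeled dataset.
def accuracy(centroids, data):
    correct = 0
    for x, y in data:
        pred = min(centroids, key=lambda c: abs(x - centroids[c]))
        correct += (pred == y)
    return correct / len(data)

train_set, test_set = make_data(400), make_data(200)
clean_model = train_centroids(train_set)

# Poison: inject 150 far-off points falsely labeled as class 1,
# dragging the class-1 centroid toward (and past) class 0's.
poison = [(-1.5, 1)] * 150
poisoned_model = train_centroids(train_set + poison)

print(f"clean accuracy:    {accuracy(clean_model, test_set):.2f}")
print(f"poisoned accuracy: {accuracy(poisoned_model, test_set):.2f}")
```

On this toy data the clean model separates the classes almost perfectly, while the injected points shift the learned centroids enough that test accuracy collapses; real attacks are subtler, but the mechanism (corrupting the statistics the model learns from) is the same.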