Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code
AI promises many advantages when it comes to application development, but it is also handing threat actors plenty of advantages of their own. It's important to remember that AI models can produce a lot of convincing garbage, and attackers can too. "Dark" AI models can be used to deliberately write malicious code, but in this blog we'll discuss three other distinct ways that using AI models can lead to attacks.