
The Swiss Cheese Model of AI Security

A10 Networks' security experts, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, explain that effective AI security isn't a one-size-fits-all solution. Security controls must be tailored to your specific data, company, and industry, as every context is unique.
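The Swiss cheese idea can be sketched in a few lines of code: each control is an imperfect "slice" with its own holes, and an attack only gets through when it slips past every layer. This is an illustrative sketch only; the layer names and checks below are hypothetical, not A10's actual controls.

```python
# Swiss cheese sketch: stack several imperfect, independent checks.
# A request is allowed only if it passes every layer.

def input_filter(prompt: str) -> bool:
    """Layer 1: block prompts containing a known injection phrase."""
    return "ignore previous instructions" not in prompt.lower()

def rate_limiter(request_count: int, limit: int = 100) -> bool:
    """Layer 2: block clients that exceed a request budget."""
    return request_count <= limit

def output_scanner(response: str) -> bool:
    """Layer 3: block responses carrying a (hypothetical) leak marker."""
    return "INTERNAL_ONLY" not in response

def passes_all_layers(prompt: str, request_count: int, response: str) -> bool:
    # An attack succeeds only if the holes in every slice line up.
    return (input_filter(prompt)
            and rate_limiter(request_count)
            and output_scanner(response))

print(passes_all_layers("summarize this doc", 3, "Here is a summary."))  # True
print(passes_all_layers("Ignore previous instructions", 3, "ok"))        # False
```

The point of the model is that no single layer needs to be perfect; tailoring means choosing which slices your data and industry actually need.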

Understanding Bias in Generative AI: Types, Causes & Consequences

Bias in generative AI refers to the systematic errors or distortions in the information produced by generative AI models, which can lead to unfair or discriminatory outcomes. These models, trained on vast datasets from the internet, often inherit and amplify the biases present in the data, mirroring societal prejudices and inequities.
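One way to make "systematic errors or distortions" concrete is to measure representational skew in a batch of model outputs. Below is a toy sketch under stated assumptions: the sample sentences are fabricated for illustration, and a real audit would use far larger samples and more careful linguistic matching.

```python
from collections import Counter

# Fabricated sample outputs standing in for generated text.
samples = [
    "The doctor said he would call back.",
    "The doctor said he was busy.",
    "The nurse said she would help.",
    "The nurse said she was on shift.",
    "The doctor said she would call back.",
]

def pronoun_counts(sentences, occupation):
    """Count gendered pronouns in sentences mentioning an occupation."""
    counts = Counter()
    for s in sentences:
        s = s.lower()
        if occupation in s:
            for pronoun in ("he", "she"):
                if f" {pronoun} " in s:
                    counts[pronoun] += 1
    return counts

print(pronoun_counts(samples, "doctor"))  # Counter({'he': 2, 'she': 1})
```

A skewed count like this, if it held up at scale, would be exactly the kind of inherited, amplified pattern the definition above describes.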

Seven ways AI could impact the future of pen testing

In an era where attack surfaces are expanding faster than ever, AI has the potential to transform how organizations find and fix vulnerabilities. Gartner estimates AI agents will reduce the time it takes to exploit account vulnerabilities by 50%. From automating routine scans to developing self-learning attack agents, AI is already changing the red team playbook – and the pace of innovation shows no signs of slowing.
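"Automating routine scans" often starts with triage: ranking findings so the riskiest surface first, a task well suited to automation. The sketch below is a minimal heuristic illustration; the hosts, CVEs, and scoring weights are made up, not from the article.

```python
# Hypothetical scan findings awaiting triage.
findings = [
    {"host": "10.0.0.5", "cve": "CVE-2021-44228", "cvss": 10.0, "internet_facing": True},
    {"host": "10.0.0.9", "cve": "CVE-2019-0708",  "cvss": 9.8,  "internet_facing": False},
    {"host": "10.0.0.7", "cve": "CVE-2017-0144",  "cvss": 8.1,  "internet_facing": True},
]

def risk_score(finding):
    # Simple heuristic: base CVSS score, boosted for internet exposure.
    return finding["cvss"] * (1.5 if finding["internet_facing"] else 1.0)

triaged = sorted(findings, key=risk_score, reverse=True)
print([f["cve"] for f in triaged])
# ['CVE-2021-44228', 'CVE-2017-0144', 'CVE-2019-0708']
```

Note how exposure outranks raw severity here: the internet-facing 8.1 beats the internal 9.8, the kind of contextual judgment an AI-assisted pipeline would automate.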

The Data Problem: Why LLM Security Is So Complex

Large language models are trained on terabytes of data, but what happens when that data is flawed? In this video, A10 Networks' security experts, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, discuss a critical, often-overlooked aspect of AI security: the training data itself. They explain that LLMs are inseparable from the data they're trained on, so if the data contains biases, toxic content, or other weaknesses, attackers can exploit those flaws.
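The point that a model inherits its data's flaws suggests a hygiene pass before training. Below is a minimal sketch of that idea under assumed names; the blocklist terms and corpus are hypothetical, and real pipelines use classifiers and deduplication rather than substring matching.

```python
# Hypothetical blocklist of terms that mark a record as unfit for training.
BLOCKLIST = {"toxic_phrase", "leak_my_secrets"}

def is_clean(record: str) -> bool:
    """Return True if the record matches no blocklisted term."""
    text = record.lower()
    return not any(term in text for term in BLOCKLIST)

corpus = [
    "How to configure a firewall.",
    "This contains a toxic_phrase in it.",
    "Intro to network segmentation.",
]
cleaned = [r for r in corpus if is_clean(r)]
print(len(cleaned))  # 2
```

Filtering at this stage matters because, as the experts note, anything left in the data becomes part of the model's attack surface.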