Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Why General SOCs Fail Against AI Threats: The Power of Specialization

Why General SOCs Fail Against AI Threats: The Power of Specialization In this clip from the A10 Networks discussion "APIs are the Language of AI Protecting them is Critical," security experts Jamison Utter and Carlo Alpuerto explore the critical role of specialization in modern security operations.

APIs are the Language of AI. Protecting them is Critical.

APIs are the Language of AI. Protecting them is Critical. In this discussion, A10 Networks security experts Jamison Utter and Carlo Alpuerto explore the emerging impact of Agentic AI on the API security landscape. They delve into how AI agents, as new API consumers, are driving an explosion in endpoints and exacerbating existing security issues, pushing API protection higher up the security practitioners' priority list.

Fixing Shadow APIs: Why True Remediation is Critical in the Age of AI

Fixing Shadow APIs: Why True Remediation is Critical in the Age of AI Agentic AI is fundamentally changing the security landscape, transforming how we think about API protection. In this insightful discussion, A10 Networks security experts Jamison Utter and Carlo Alpuerto dive deep into the challenges presented by this new wave of automation and API consumers.

Multimodal Attacks and Model Drift: The Future of AI Exploitation

Multimodal Attacks and Model Drift: The Future of AI Exploitation A10 security experts Jamison Utter, Diptanshu Purwar, and Madhav Aggarwal discuss the critical vulnerabilities emerging from multimodal AI agents (systems that perceive, decide, and act) and the absolute need for security mechanisms external to the Large Language Model (LLM) itself. The experts dive into why traditional security is failing and what the next evolution of defense must look like.

Invisible Instructions: Multimodal AI is Already Being Tricked

Invisible Instructions: Multimodal AI is Already Being Tricked In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar and Madhav Aggarwal respond to Jamison Utter's example of a new, well-known form of multimodal attack: abusing AI resume screeners by exploiting both text and visual processing. The Resume Attack: White-on-White Text.

Securing AI: Why Vision Models Struggle with Transparency and Depth

Securing AI: Why Vision Models Struggle with Transparency and Depth In this clip from "Securing AI, Part 4," A10 security expert Madhav Aggarwal highlights a fundamental challenge still faced by even the most popular AI vision models and chatbots: transparent objects. Madhav explains how these corner cases—situations that are obvious to a human but complex for a machine—can easily throw an AI model "completely off.".

From Model Drift to API Exploitation: The Next Challenge in AI Security

From Model Drift to API Exploitation: The Next Challenge in AI Security In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar and Madhav Aggarwal summarize why external guardrails are the only sustainable defense against the new wave of AI exploitation. Jamison Utter then sets the stage for the next topic in the series: securing the fundamental protocols and APIs that AI agents rely on.