Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

The Mythical 1+1=3 Model in Cybersecurity

The mythical 1+1=3 model in security? It happens when the tools you already own stop working in isolation — and start working as a system. Jay Wilson and Garrett Hamilton dig into why Reach’s platform approach matters: not just enhancing individual controls, but creating compounding value across identity, endpoint, email, and network. When visibility, configuration, and enforcement align, the outcome isn’t incremental — it’s exponential.

APIs are the Language of AI. Protecting them is Critical.

APIs are the Language of AI. Protecting them is Critical. In this discussion, A10 Networks security experts Jamison Utter and Carlo Alpuerto explore the emerging impact of Agentic AI on the API security landscape. They delve into how AI agents, as new API consumers, are driving an explosion in endpoints and exacerbating existing security issues, pushing API protection higher up the security practitioners' priority list.

Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems

The rapid adoption of AI has introduced a new, semantic attack vector that many organizations are ill-prepared to defend against: prompt injection. While many security teams understand the threat of direct prompt injection attacks against AI agents developed by their organizations, another more subtle threat lurks in the shadows: indirect prompt injection attacks.

Why AI Security Requires Context: Introducing Issues & the Correlation Agent

Data is never the problem. Security teams rarely complain about having too much of it. The real danger comes from data that sits unconnected and unexplained. What teams actually need is data that is actionable and converges into meaning. Data that cuts deeper than surface level signals. Data that reveals what is unfolding and what needs to happen next.

The AI Answering Service Wake Up Call for Modern Businesses

Here is the 4 a.m. question almost no owner says out loud. What actually happens when a real customer calls your business and you do not pick up? You assume it is not that bad and maybe they leave a voicemail. You think maybe they call back or your AI answering service catches it next time. The truth from the inside is uglier and I should know because I helped build the systems that make those calls disappear. I have watched good businesses bleed out through their phone lines while everyone stares at the wrong metrics.

The Shadow AI reality: Inside Cato's survey results

AI tools have proved their worth in the workplace. They help us write, research, code, plan, and automate. They’re making employees faster and more productive, and helping businesses move and innovate at a pace that wasn’t possible before. But AI’s rise wasn’t orchestrated by IT. It didn’t always arrive through formal adoption plans or procurement cycles. It turned up in shared links to popular GenAI and other tools, self-sanctioned and adopted by users in minutes.

Inside the Agent Stack: Securing Agents in Amazon Bedrock AgentCore

In the first installment of our Inside the Agent Stack series, we examined the design and security posture of agents built with Azure Foundry. Continuing the series, we now focus on Amazon Bedrock AgentCore, a managed service for building, deploying, and orchestrating AI agents on AWS.

Securing the New AI Edge: Why Salt Security Is Bringing MCP Protection to AWS WAF

The definition of the "edge" is changing. For years, security teams have focused on the traditional perimeter: web applications, public APIs, and user interfaces. We built firewalls, deployed WAFs, and established strict access controls to keep bad actors out. But with the rapid adoption of Agentic AI, the perimeter has expanded. Today, your "edge" isn't just where users connect to your apps; it's where AI agents connect to your data.