
AI vs. Legacy Systems: Why Your Old Documentation Isn't Safe

In this A10 Networks discussion, "APIs are the Language of AI: Protecting Them is Critical," security experts Jamison Utter and Carlo Alpuerto tackle one of the most overlooked threats in modern IT: the "specter in the shadow." For years, many organizations relied on "security through obscurity," the idea that if a system is old or undocumented, it is safe from attackers. AI has changed the rules: modern AI systems can decipher legacy documentation and communicate with obscure systems faster than any human ever could.

The Architecture of Agentic Defense: Inside the Falcon Platform

The architectural divide in cybersecurity is no longer theoretical. It's operational. Adversaries are deploying AI-accelerated attacks and moving laterally across domains faster than human analysts can correlate evidence. Meanwhile, defenders are adopting AI tools that accelerate individual tasks but still operate on fragmented data and require manual correlation across disconnected systems.

What's shaping the AI agent security market in 2026

For the past two years, AI agents have dominated boardroom conversations, product roadmaps, and investor decks. Companies made bold promises, tested early prototypes, and poured resources into innovation, with analysts projecting an economic impact of $2.6 trillion to $4.4 trillion. As 2026 begins, the experimentation phase is giving way to the production era, with organizations rolling out AI agents at scale across their enterprises.

Egnyte Joins Anthropic to Bring Secure, Responsible AI to Financial Services

Egnyte is proud to partner with Anthropic in the next phase of Claude for Financial Services—making it easier than ever for sales, investment, and compliance teams to bring their content, context, and institutional knowledge directly to Claude with governed, secure access. As financial institutions race to unlock insights from decades of documents, models, and market data, the challenge has never been simply access.

AppGuard Critiques AI Hyped Defenses; Expands its Insider Release for its Next-Generation Platform

A new Top 10 Cybersecurity Innovators profile by AppGuard has been released, spotlighting growing concerns over AI-enhanced malware. AI makes malware even more difficult to detect; worse, attackers use AI to assess, adapt, and move faster than any cyber stack can keep up. The report advocates a fundamental change in approach, highlighting the limitations of reactive security measures. Rather than constantly adding or changing detection layers, the profile emphasizes reducing the endpoint attack surface, a perspective that challenges conventional industry practices.

Introducing your AI interaction layer

AI is everywhere, but without a consistent and secure way to connect it to real systems, it remains fragmented, difficult to govern, and hard to scale. Today, we’re introducing your AI interaction layer. Tines unifies AI agents, copilots, and Model Context Protocol (MCP) servers and clients in a single, secure environment. It gives teams a practical way to connect AI to systems and put it to work seamlessly across operations.
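For readers unfamiliar with MCP: it is built on JSON-RPC 2.0, so a client invokes a server-side tool by sending a structured request. The sketch below is a rough illustration of what such a tool invocation looks like on the wire; the tool name `lookup_ticket` and its arguments are invented for the example, not part of Tines or the MCP specification.

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation.

    Illustrative only: a real MCP client also performs an initialize
    handshake and transports these messages over stdio or HTTP.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool and arguments, purely for demonstration.
msg = mcp_tool_call(1, "lookup_ticket", {"ticket_id": "SEC-1234"})
print(msg)
```

A unified interaction layer of the kind described above would sit between agents and these per-server connections, applying a single set of authentication and governance policies to every such request.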

LLM Security Checklist: Essential Steps for Identifying and Blocking Jailbreak Attempts

If your organization uses a private large language model (LLM), then it’s time to start thinking about countermeasures for jailbreaking. A jailbroken LLM can lead to leaked information, compromised devices, or even a large-scale data breach. Even more troubling: Jailbreaking LLMs is often as simple as feeding them a series of clever prompts. If your customers can access your LLM, your potential risk is even higher.