Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Attacking the MCP Trust Boundary

Every secure API draws a line between code and data. HTTP separates headers from bodies. SQL has prepared statements. Even email distinguishes the envelope from the message. The Model Context Protocol (MCP), the fast-growing standard for connecting AI agents to external services, inherits that gap from the models it sits on top of.

Torq Leads Every Category in the 2026 KuppingerCole Analysts Leadership Compass: Emerging AI SOC

See how Torq harnesses AI in your SOC to detect, prioritize, and respond to threats faster. Request a Demo The security automation market just got its definitive evaluation and its new name. KuppingerCole Analysts is the global analyst firm that sets the benchmark for cybersecurity technology evaluations.

NIST CSF 2.0 and Agentic AI: Building Profiles for Autonomous Systems

AI agents are likely already running inside your infrastructure. They triage alerts, remediate incidents, provision resources, and make decisions without waiting for a human to approve each step. For teams aligned to NIST’s Cybersecurity Framework (CSF) 2.0, this creates a problem: the framework assumes human actors, human-speed decisions, and human-readable audit trails. Autonomous systems break all three assumptions. The good news is that CSF 2.0 was designed to be adapted.

Your auditor is about to ask about AI agents. 9 things they'll want to see

Accelerating security solutions for small businesses‍ Tagore offers strategic services to small businesses. A partnership that can scale‍ Tagore prioritized finding a managed compliance partner with an established product, dedicated support team, and rapid release rate. Standing out from competitors‍ Tagore's partnership with Vanta enhances its strategic focus and deepens client value, creating differentiation in a competitive market. Studies show that AI adoption outpaces understanding.

Why Identity Security is Key To Managing Shadow AI

Employees are adopting Artificial Intelligence (AI) tools to enhance their productivity, but they rarely consider the security implications of doing so. When an employee pastes sensitive customer data into an unapproved AI tool, that data is processed by a third-party model outside the organization’s control, often leaving no audit trail for security teams to review.

Frontier AI Is Collapsing the Exploit Window. Here's How Defenders Must Respond.

The defensive timeline in cybersecurity is changing faster than most organizations are prepared for. For years, defenders operated with an assumption that there would be some delay between vulnerability disclosure and exploitation. That delay created a window for patching, mitigation, and detection. It wasn’t perfect, but it gave security teams time to act. Frontier AI is removing that buffer and changing how organizations must consider cyber risk.

AI Workload Security on GKE: Evaluating Google Cloud Native vs Third-Party Solutions

A CISO running AI agents on GKE has watched three Google product launches in eighteen months — Model Armor, expanded Security Command Center coverage for AI workloads, additions to Chronicle’s curated detection content — and is being asked whether the GCP-native stack is now sufficient. The vendor demos and the Google Cloud blog say yes. The 2 AM analyst experience says something different.

What we learned using AI agents to refactor a monolith

AI agents are increasingly used to refactor large codebases, but many teams lack a clear understanding of where they succeed and where they fail. At 1Password, we applied agentic tooling to a multi-million-line Go monolith, and in this blog we'll share what worked, what broke, and what it means for teams adopting AI in production systems.