Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

INETCO's Bijan Sanii on Conversations Live: 'Cybersecurity is an arms race. AI today, quantum tomorrow'

At the recent Conversations Live with Stuart McNish panel on cybersecurity — part of the thoughtful public affairs dialogue series produced in partnership with the Vancouver Sun — industry leaders gathered to unpack the real-world risks shaping organizational resilience and national security. The event, held on Dec. 10, 2025, brought together experts from across the cybersecurity landscape to go beyond headlines and explore strategies for responding to evolving threats.

How security leaders can safely and effectively implement agentic AI

2025 began with experts warning about the dangers of agentic AI use—but that didn’t slow adoption. Our annual State of Trust Report shows that nearly 80% of organizations are either actively using or planning to use agentic AI. That acceleration is outpacing the governance required to keep these systems safe: ‍ ‍ A level of machine autonomy that would’ve been unthinkable just a few years ago is quickly becoming normalized.

Stop Feeding Logs to LLMs: A Multi-Agent Approach to Security Investigation

See how Torq harnesses AI in your SOC to detect, prioritize, and respond to threats faster. Request a Demo Noam Cohen is a serial entrepreneur building seriously cool data and AI companies since 2018. Noam’s insights are informed by a unique combination of data, product, and AI expertise — with a background that includes winning the Israel Defense Prize for his work in leveraging data to predict terror attacks.

Secure AI Agent Infrastructure with Zero-Code MCP

Learn how to secure AI and MCP infrastructure without writing authorization code, rewriting MCP servers, or limiting agent work with Teleport’s zero-code MCP integration. AI agents are becoming powerful participants in engineering workflows. But without meaningful authorization boundaries, they can quickly become an existential security risk. AI agents do not behave like traditional applications. Instead, they generate actions and chain together tools in unpredictable ways.

Proactive WAF Vulnerability Protection & Firewall for AI + Multiplayer Chess Demo in ChatGPT

In this episode of This Week in NET, we talk with Daniele Molteni, Director of Product Management for Cloudflare’s WAF, about how Cloudflare responded within hours to a newly disclosed React Server Components vulnerability — deploying global protection before the public advisory was even released.

Questions to ask before vetting an AI agent for your SOC

So you’re ready to “hire” an agent or two for security operations. While AI agents won’t replace your human analysts, they are quickly becoming indispensable team members. Choosing the right ones should resemble a typical hiring process: you need to determine if they possess the necessary skills to fill your team’s gaps, work effectively with others, and grow with your organization. Here are five questions worth asking before you bring an AI agent on board in your SOC.

Why "We Thought It Was On" Keeps Leading to Breaches

At UC Irvine’s Digital Leadership Agenda 2026, moderated by Nicole Perlroth, Garrett Hamilton illustrates what those blind spots can look like: “We believed it was deployed.”“It was turned on.”“It should have stopped this.” Except one exception, one policy gap, one control not applied at scale — and assumptions replace reality. The real problem isn’t visibility. It’s continuously validating intent against execution.

Misconfigurations Are Still Owning Security Teams

Garrett Hamilton sat down with Todd Graham, Managing Partner at Microsoft’s venture fund, M12, to talk about why M12 invested in Reach and why our mission was a no-brainer for him. Nation-state attacks make the headlines—but most people are getting owned by misconfigured servers, networks, and controls hiding in plain sight. Turns out the problem isn’t what teams don’t own. It’s what they do own that isn’t, in most cases, even turned on.

Are LLMs becoming messengers for attackers? #ai #cybersecurity

AI assistants with broad enterprise access are creating a new attack vector. Chris Luft and Matt Bromiley discuss the Gemini Jack vulnerability, where attackers used prompt injection to turn Google's AI assistant into an unwitting accomplice in data exfiltration. The attack embedded hidden instructions in documents or emails. When employees asked Gemini normal questions like "show me our budgets," the AI retrieved the poisoned document and executed the attacker's commands without anyone clicking anything.

Security Visionaries | Agentic AI Threats: Hype or Reality

Are agentic AI threats just hype or reality? Security Visionaries host Max Havey digs into the world of agentic AI-enabled threats and cyber espionage with guests Neil Thacker, Global Privacy and Data Protection Officer at Netskope, and Ray Canzanese, Head of Netskope Threat Labs. IN THIS EPISODE.