Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Claude Mythos Just Killed Exploitability as a Security Signal

The game has changed. For years, security teams used exploitability to decide what to patch first. If a vulnerability had a known exploit, it went to the top of the list. If not, it waited. But with the arrival of next-gen AI models like Claude Mythos, that strategy is officially broken. In this video, we discuss how Claude Mythos has collapsed the barrier to building working exploits. What used to take real skill and significant time can now be weaponized in minutes. When everything is exploitable, exploitability becomes noise.

The Governance Gap: How the EU AI Act Makes API Security a Compliance Imperative

Your legal team just handed you a 400-page document and said "figure out compliance." The EU AI Act is live, your organization falls under its scope, which is broader than many expect. Even non‑EU companies must comply if their AI systems are used, deployed, or produce effects within the European Union. In practice, that means that global organizations building or integrating AI models cannot treat the Act as a regional regulation.

Types of AI Guardrails and When to Use Them (2026)

The types of AI guardrails are input guardrails, output guardrails, security guardrails, ethical guardrails, and operational guardrails, each positioned at a different failure point across an inference pipeline. Gartner’s research found that 30% of generative AI projects don’t survive past the proof-of-concept stage, with weak risk controls cited as the leading reason. Most of those projects weren’t badly built. The models worked. The gaps were in what sat around them.

The Zero-Trust Audit: Protecting Financial Intelligence in the Cloud

Digital finance is shifting away from the old way of securing data. The old method relied on a strong perimeter to keep threats out. Once someone was inside the network, they often had free rein to move around. Cloud systems make that perimeter vanish because data moves between different apps and users constantly.

Claude Mythos Explained: AI Finding Zero-Day Vulnerabilities and Chaining Exploits

Claude Mythos is an AI model capable of finding and chaining zero-day vulnerabilities at scale. That changes how attacks happen, especially in environments where you can’t patch fast enough. The Forescout 4D Platform with VistaroAI helps organizations respond with real-time visibility and dynamic control across all connected devices.

Stopping AI Agent Attacks: How Falcon AIDR Blocks Prompt Injection

See how attackers can exploit AI agents like OpenClaw using hidden prompt injection techniques—and how CrowdStrike Falcon AIDR stops them in real time. In this demo, we show how a seemingly harmless resume contains invisible malicious instructions that trick an AI agent into leaking sensitive data, including API tokens and system access. Then, we replay the same scenario with Falcon AIDR enabled, where the attack is detected and blocked before any damage is done.

Why API Discovery Is the First Step to Securing AI

AI risk doesn’t live in the model. It lives in the APIs behind it. Every AI interaction triggers a chain of API calls across your environment. Many of those APIs aren’t documented or tracked. That’s your real exposure. Shadow API discovery gives you visibility into those hidden endpoints, so you can find them before attackers do. If you don’t know which APIs your AI relies on, you can’t secure the system.

Explainable AI in Email Security: From Black Box to Clarity

Generative AI and sophisticated social engineering have reshaped the cybersecurity landscape in 2026. Traditional "castle-and-moat" defenses centered on the Secure Email Gateway (SEG) are increasingly pressured by machine-scale attacks designed to bypass static filters. As organizations shift toward Integrated Cloud Email Security (ICES) models, a new technical and psychological barrier appears: the "black box" problem of defensive AI.

This Project Glasswing Announcement is Bigger Than You Think

Anthropic's Project Glasswing and Mythos Preview model represent a seismic shift in cybersecurity. This AI is specifically tuned for vulnerability discovery, code review and security hardening at unprecedented speed. In this episode of Razorwire Raw, Jim Rees breaks down what Project Glasswing actually means for information security professionals and the concerns nobody's talking about yet.

Autonomous AI Agents Explained: Risks, Capabilities & Security Gaps

Autonomous AI agents are no longer experimental—they’re writing code, executing commands, and making decisions in real time. But as AI coding agents become more powerful, they’re also introducing a new and often invisible attack surface. In this video, we break down: AI agents can install packages, run scripts, and modify systems instantly—often without traditional visibility. That means security teams need to rethink how they monitor and protect their environments.