Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

CISOs Missing the Real AI Threat #podcast #aisecurity

This episode looks at what happens when AI starts finding vulnerabilities at scale, restricted access creates market imbalance, and security teams struggle to keep pace. It covers fragile infrastructure, bug brokers, overloaded analysts, CISO fear, and the growing sense that cyber defence is entering a faster and harsher era.

Cloudflare Just Shipped 20+ Features for AI Agents in One Week

The conversation explores why the Internet and the cloud were not designed for an AI-agent world, and what infrastructure needs to change as software agents begin generating code, running workflows, and interacting directly with online services. Ming and Anni walk through several announcements from Cloudflare’s Agents Week, including new tools for agent infrastructure, memory, developer workflows, AI Gateway, email, artifacts, browser automation, security, and agent-ready websites.

VibeScamming: Why AI-built scams are changing phishing risk

VibeScamming refers to AI-assisted phishing operations where attackers use natural-language tools to rapidly generate and modify phishing content and web pages, lowering (but not eliminating) the technical skill required. One of the primary enterprise impacts is faster phishing iteration and reconstitution after blocks or takedowns, with identity compromise remaining a major risk alongside malware and other payload-based attacks.

Building Know Your Agent: The missing identity layer for agentic commerce

AI agents are being deployed in the real world at pace. In the enterprise realm, they’re accessing APIs, shipping code, and running decisioning workflows on behalf of the organizations and individuals who deploy them. Entirely new businesses have sprung up, leveraging AI agents to streamline customer support and sales processes.

LimaCharlie is the most secure way to run AI security agents

The idea that AI agents will run security operations is becoming reality. But most platforms ignore the most important question: how do you secure the agents themselves? In this video I walk through why LimaCharlie is the most secure platform for running agentic security operations and demonstrate the architectural controls that make it possible. We look at the core mechanisms that allow AI agents to operate safely inside a SecOps environment.

Agentic AI at risk after MCP design flaw discovery? #ai #cybersecurity #podcast

In this week's Intel Chat, Chris Luft and Matt Bromiley discuss a design flaw in Anthropic's Model Context Protocol (MCP) that could enable large-scale supply chain attacks on agentic AI systems. Researchers at OX Security found that MCP's command execution allows malicious commands to run silently without sanitization checks or warnings.
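The fix for unsanitized command execution is conceptually simple: validate agent-supplied commands against an allowlist and reject shell metacharacters before anything runs. The sketch below is illustrative only; it is not Anthropic's MCP code, and the allowlist and forbidden-character set are assumptions chosen for the example.

```python
import shlex

# Hypothetical allowlist of commands an agent tool may run (assumption for this sketch).
ALLOWED_COMMANDS = {"ls", "cat", "grep"}
# Characters that enable chaining, substitution, or redirection in a shell.
FORBIDDEN_CHARS = set(";&|`$><")

def sanitize_command(raw: str) -> list[str]:
    """Split a raw command string and reject anything outside the allowlist
    or containing shell metacharacters; return a safe argv list."""
    if any(ch in FORBIDDEN_CHARS for ch in raw):
        raise ValueError("shell metacharacters are not allowed")
    argv = shlex.split(raw)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise ValueError("command not in allowlist")
    return argv
```

Passing the resulting argv list to an exec-style API (never a shell) is what keeps an injected `"; rm -rf /"` from ever being interpreted.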

Shift-Left Testing Only Works If Your Tests Are Trustworthy

Shift-left has become the standard answer to the quality and security problems that accumulate when testing happens late. Move testing earlier. Catch defects in development, not in production. Run security checks in the pipeline, not in a post-release audit. The principle is sound. The execution is where most teams run into trouble.
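"Security checks in the pipeline" can start as small as a scripted scan run on every commit. The sketch below shows the idea with two regex rules; the patterns are illustrative assumptions, and production scanners ship far broader, better-tuned rule sets.

```python
import re

# Illustrative secret-shaped patterns (assumptions for this sketch, not a complete rule set).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
]

def scan_text(text: str) -> list[str]:
    """Return every secret-like string matched in `text`."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Wiring a check like this into pre-commit or CI, and failing the build on any hit, is the shift-left principle in miniature: the defect is caught before it ever reaches a release audit.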

Why AI Security Needs More Than One Tool #shorts #ai

Most teams believe a single cybersecurity tool, like a WAF, EDR, or API security, is enough to protect their AI systems. But that approach is outdated. AI security is not one layer; it is a full-stack problem:

- Discovery: identify Shadow AI and unknown AI usage
- Build-Time Security: prevent data poisoning and model risks (MLSecOps)
- Runtime Security: stop real-time AI attacks and agent misuse
- Governance (AISPM): ensure visibility, compliance, and policy control

AI Penetration Testing: Protecting LLMs From Cyber Attacks

88% of organizations now regularly use artificial intelligence (AI) in at least one business function. While adoption of AI technologies has accelerated rapidly, security measures often lag. The rush to roll out AI has, in many cases, overshadowed essential testing and safety protocols. This is particularly concerning when AI and Large Language Models (LLMs) become deeply embedded within organizational workflows and systems in a way that most software isn't.

Introducing Decipio: A Community Tool to Catch Credential Theft in the Act with Defense First AI

Today, Arctic Wolf is announcing Decipio, a new community‑shared cybersecurity tool designed to help defenders catch attackers while they’re trying to steal credentials inside a network. Credential theft is one of the most common ways cyber attacks begin and one of the hardest to detect early. In many cases, there’s no alert, no obvious warning, and no immediate sign that anything is wrong.
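One classic way to catch credential theft in the act is the honeytoken: plant decoy credentials that no legitimate process ever uses, then treat any use of them as a high-fidelity alert. The sketch below illustrates that general idea only; it is not how Decipio is implemented, and the decoy value and alert sink are assumptions for the example.

```python
# Record of fired alerts; a stand-in for paging or SIEM integration.
alerts: list[str] = []

def alert(message: str) -> None:
    """Deliver a detection alert (here, just collect it)."""
    alerts.append(message)

# Decoy credentials seeded around the network (hypothetical value for this sketch).
PLANTED_TOKENS = {"svc-backup:Hx91!fake"}

def check_login_attempt(username: str, password: str) -> bool:
    """Return True and fire an alert when a planted decoy credential is used."""
    if f"{username}:{password}" in PLANTED_TOKENS:
        # No legitimate login ever uses a decoy, so this is a near-zero-noise signal.
        alert(f"honeytoken used by account {username!r}")
        return True
    return False
```

The appeal of this approach is exactly the gap the announcement describes: unlike most early-stage theft, a honeytoken trip produces an immediate, unambiguous sign that something is wrong.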