
The Legitimate Bot Traffic Security Teams Can No Longer Overlook

Security teams have spent years refining their ability to detect and stop malicious bots. That work remains critical. Automated traffic now accounts for more than half of all web traffic, according to Imperva's 2025 Bad Bot Report. What has changed is the scale and influence of legitimate bots and the blind spots they introduce into modern security programs.

Exabeam Introduces First Connected System for AI Agent Behavior Analytics and AI Security Posture Insight

Industry leadership expanded with connected capabilities that not only uncover AI agent activity but also centralize investigation and deliver measurable AI security posture insights.

When Seeing Isn't Believing: AI Images, Breaking News and the New Misinformation Playbook

In the early hours following reports of a U.S. military operation involving Venezuela, social media feeds were flooded with dramatic images and videos that appeared to show the capture of Venezuelan president Nicolás Maduro. Within minutes, AI-generated photos of Maduro being escorted by U.S. law enforcement, scenes of missiles striking Caracas, and crowds celebrating in the streets racked up millions of views across various social media channels. The problem?

AI-Enabled Cyber Intrusions: What Two Recent Incidents Reveal for Corporate Counsel

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company. Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself.

Why AI security looks different across the UK, France, Germany, and Australia

Globally, 88% of companies regularly use AI in at least one business function—a 10% increase from the previous year. But as organizations race to adopt new capabilities, we’ve found that the rigor and maturity of AI governance vary widely by region. The third edition of our State of Trust report reveals how leading AI adopters outside the U.S.—from the UK to Germany, France, and Australia—are approaching AI security and governance in distinct ways.

The Silent Threat to the Agentic Enterprise: Why BOLA is the #1 Risk for AI Agents

In the race to deploy autonomous AI agents, organizations are inadvertently building on a foundation of shifting sand. While security teams have spent the last year focused on "Prompt Injection" and "Model Poisoning," a much older, more dangerous adversary has quietly become the primary attack vector for the agentic era: Broken Object Level Authorization (BOLA).
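BOLA is simple to describe and easy to miss: an endpoint trusts a caller-supplied object ID and never verifies that the object belongs to the caller. A minimal sketch in Python illustrates the pattern; the order data, function names, and the agent-acting-for-a-user scenario are invented for this example, not drawn from any real incident.

```python
# Hypothetical illustration of Broken Object Level Authorization (BOLA).
# An AI agent calling an API on behalf of user "alice" supplies an order ID;
# the vulnerable handler returns whatever record that ID points to.

ORDERS = {
    101: {"owner": "alice", "total": 42.00},
    102: {"owner": "bob", "total": 99.99},
}

def get_order_vulnerable(caller: str, order_id: int) -> dict:
    """BOLA: trusts the caller-supplied ID and never checks ownership,
    so an agent acting for "alice" can read bob's order 102."""
    return ORDERS[order_id]

def get_order_fixed(caller: str, order_id: int) -> dict:
    """Object-level check: the fetched record itself must belong to
    the caller, regardless of which ID was requested."""
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != caller:
        raise PermissionError("not authorized for this object")
    return order
```

The fix is an object-level check at read time, not a role check at login time: an agent may be a perfectly authenticated, perfectly "safe" caller and still request an ID it should never see.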

Model Context Protocol Server: The Universal Remote for AI Agents

The Model Context Protocol (MCP) is emerging as a foundational interoperability layer for agentic AI, embraced by major platform providers. MCP simplifies how AI models connect to external tools and data. Think of it as a universal remote for security platforms: Instead of building fragile, one-off integrations, MCP allows AI to discover and use capabilities dynamically. For SIEM and detection providers, this shift is significant.
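The "universal remote" idea rests on two JSON-RPC methods in the MCP specification: a client calls tools/list to discover what a server can do, then tools/call to invoke a capability. The sketch below mimics that request shape in plain Python; the tool name, its behavior, and the dispatcher are invented for illustration and are not the official MCP SDK.

```python
# Simplified sketch of MCP-style dynamic tool discovery.
# Real servers use the official MCP SDKs and a full JSON-RPC transport;
# the tool below ("lookup_indicator") is hypothetical.

TOOLS = {
    "lookup_indicator": {
        "description": "Look up a threat indicator in a SIEM (hypothetical).",
        "handler": lambda args: {"indicator": args["value"], "verdict": "benign"},
    },
}

def handle_request(req: dict) -> dict:
    """Dispatch the two core methods a client uses to discover and
    invoke tools, instead of hard-coding a one-off integration."""
    if req["method"] == "tools/list":
        return {"tools": [{"name": name, "description": tool["description"]}
                          for name, tool in TOOLS.items()]}
    if req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        return {"result": tool["handler"](req["params"]["arguments"])}
    return {"error": "method not found"}
```

Because the client learns the tool catalog at runtime, a SIEM vendor can add or change capabilities without every connected AI agent shipping a new integration.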

Will AI agents 'get real' in 2026?

In my house, we consume a lot of AI research. We also watch a lot—probably too much—TV. Late in 2025, those worlds collided when the AI giant Anthropic was featured on “60 Minutes.” My husband tried to scroll past it, but I snatched the controller away, unable to resist a headline calling out the first widely acknowledged case of an “agentic AI cyberattack.” The framing itself was irresistible, a milestone moment in the rapid acceleration of AI.

Agentic AI Security: How Does Microsoft Prevent Autonomous Agent Attacks?

As agentic AI systems move into the mainstream—powered by tool calling, MCP, and autonomous workflows—security is no longer a “nice to have.” It’s mission-critical. In this episode, we sit down with Raji, Principal Engineer & Manager for AI and Safety at Microsoft, to deep-dive into the rapidly evolving world of AI security, autonomous agents, and enterprise governance. Discover how Microsoft identifies and mitigates risks in agentic AI, distinguishes AI Security from AI Safety, and enables organizations to deploy autonomous systems safely at scale—without slowing innovation.