Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI Agent-to-Agent Communication: The Next Major Attack Surface

We are witnessing the end of the "Human-in-the-Loop" era and the beginning of the "Agent-to-Agent" economy. Until recently, most AI interactions were hub-and-spoke models where a human user prompted a central model, reviewed the output, and then took action. That model provided a natural safety brake. If the AI hallucinated or suggested a malicious action, a human was there to catch it. That safety brake is disappearing.

Exabeam Agent Behavior Analytics: First-of-Its-Kind Behavioral Detections for AI Agents

AI agents are moving into real workflows faster than most teams expected. According to PwC’s 2025 AI Agent Survey, 79% of companies are already adopting AI agents, and 88% of executives expect to increase AI-related budgets in the next year. These agents are now handling research, summarization, customer engagement, and operational tasks at a scale humans can’t match.

0-Click RCE in Claude Desktop: How AI Extensions Threaten Endpoint Security

The modern enterprise software ecosystem increasingly relies on desktop AI applications enhanced through extensible plugin or extension frameworks. These extensions are designed to improve productivity by enabling integrations with local files, browsers, APIs, developer tools, and internal systems. However, this same extensibility introduces a high-risk attack surface when extension permissions, sandboxing, and input validation are weakly enforced.

Using SSL Inspection and AI Guardrails to Protect Infrastructure

Using SSL Inspection and AI Guardrails to Protect Infrastructure How do you protect your AI infrastructure from threats without impacting user experience? In this video, we'll cover the methods organizations can use to inspect encrypted traffic, including what is sent to AI chatbots, and add guardrails to protect against security risks. We'll cover.

How do AI guardrails protect infrastructure from the unsafe and unpredictable territory of LLM risks

How do AI guardrails protect infrastructure from the unsafe and unpredictable territory of LLM risks? An AI firewall or guardrail device sits between your applications and large language models to keep the data sent and received from LLMs safe, compliant, and high-quality. Its design is to inspect natural-language traffic and protect your infrastructure against LMM vulnerabilities, including prompt injection, jailbreak attacks, data poisoning, system prompt leakage, and OWASP Top 10 vulnerabilities, using advanced, proprietary reasoning models.

A January Snapshot: Real-World AI Usage

AI is no longer a fringe productivity experiment inside organisations, it is embedded, habitual, and increasingly invisible. This snapshot from CultureAI’s January usage data highlights how AI is actually being used across everyday workflows, and where risk is forming as a result. Rather than focusing on hypothetical threats or model-level concerns, the findings below surface behavioural signals from real interactions: prompts, file uploads, and context accumulation.

The Human-AI Alliance in Security Operations

Picture a SOC analyst starting an investigation. A suspicious spike in authentication activity appears on their dashboard, and they need to understand what’s happening quickly. To do that, they move through a familiar sequence of tools. What begins as a single investigation quickly turns into a chain of context switches: That’s nine steps to investigate one event. This isn’t accidental. Security tools have evolved to solve isolated problems, but together they have created fragmentation.

Inside the Human-AI Feedback Loop Powering CrowdStrike's Agentic Security

Adversaries are continuously evolving their tactics, techniques, and procedures to evade both legacy and AI-native defenses, and they’re using AI to their advantage. Stopping them requires a new approach: humans and AI working together. While AI can correlate massive volumes of telemetry at machine speed, pattern recognition alone is not enough to stop modern attacks. Training on detections teaches models what happened, but not why it mattered.