
The Emerging Security Risks of Agentic AI

AI is moving fast. But the transition from GenAI tools that respond to prompts to AI agents that execute workflows represents something qualitatively different for security leaders. The shift is not merely one of scale; it is a fundamental change in how data moves, who touches it, and which decisions get made, often without human review.

How Adaptive Email Security Helps Navigate Threats in the Age of AI

A finance employee receives an email that appears to come from the CFO requesting urgent payment approval. The message references a current project, uses the correct tone, and arrives at a plausible time. However, the email wasn’t written by a colleague — it was generated by AI. And it contains a malicious link. These attacks are becoming more common as threat actors use AI to produce convincing phishing emails, automate impersonation attempts, and launch social engineering campaigns at scale.

How AI Dash Cams are Revolutionizing Fleet Safety in 2026

Road safety has changed significantly in the last few years. Trucks and vans now carry smart sensors that monitor the road more consistently than a human driver can, protecting drivers and everyone else on the street. Managers can see what is happening in the cab and on the road at the same time, which gives them a clear view of daily operations and keeps drivers safer. The resulting data helps businesses save money and stay on schedule.

Securing Agentic AI: Why Visibility, Behavior, and Guardrails Matter

Agentic AI is quickly transitioning from experimentation to production. Enterprises are deploying AI agents to interpret goals, decide what actions to take, interact with business tools and APIs, and execute those actions autonomously, with limited or no human oversight. The promise is speed and efficiency, but the proverbial “blast radius” is bigger and fundamentally different from anything security teams have managed before.

Why Your AI Workflow Should Never Depend on a Single Model

Network engineers have long understood redundancy. Redundant power, redundant links, redundant clusters. The reasoning is simple: any single component that can fail, will. But AI introduces a category of failure that most infrastructure teams have not yet built defenses against. Unlike hardware, AI models can become unavailable for reasons entirely outside your organization's control.
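The redundancy argument above can be sketched in a few lines of Python. This is an illustrative pattern only, not from any particular library: a hypothetical `call_with_fallback` helper tries an ordered list of model backends and moves to the next when one is unavailable, mirroring how redundant links fail over in a network.

```python
# Illustrative sketch of multi-model failover (hypothetical names, no real
# provider SDKs). Each backend is just a callable that takes a prompt.

def call_with_fallback(prompt, backends):
    """Try each (name, backend) pair in order; return the first success."""
    errors = []
    for name, backend in backends:
        try:
            return name, backend(prompt)
        except Exception as exc:  # a real system would catch narrower errors
            errors.append((name, exc))
    raise RuntimeError(f"all model backends failed: {errors}")

# Simulate an outage at the primary provider and a healthy secondary.
def primary(prompt):
    raise ConnectionError("provider outage")

def secondary(prompt):
    return f"echo: {prompt}"

used, reply = call_with_fallback(
    "status check",
    [("primary", primary), ("secondary", secondary)],
)
# used is "secondary": the request silently failed over.
```

A production version would add timeouts, per-provider health checks, and prompt/response normalization across model APIs, but the control flow stays this simple.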

Setting a Higher Standard for Security Outcomes in the AI Era

Customers do not experience AI as architecture. They experience it as outcomes. They experience it in the quality of the signal they receive, the speed of the investigation, the confidence behind the recommendation, and the amount of time their teams can spend being proactive instead of buried in noise. That is why the most important question in cybersecurity today is not whether a vendor has AI. It is whether that AI produces better outcomes. Security teams are not buying AI for its own sake.

Episode 11 - The AI Maturity Journey: Data, Agents, and the Shift from Craft to Art

Richard Bejtlich talks with Vijit Nair, VP of Product at Corelight, about the evolving "AI Maturity Journey" for modern security teams. Vijit outlines a three-level spectrum of AI adoption, moving from basic human-driven assistance to automated swarms of agents, and eventually toward fully autonomous systems. They discuss why high-quality, unopinionated data remains the essential foundation for building trust in AI and how technologies like the Model Context Protocol (MCP) are turning human language into the primary interface for tool integration.

Four Excuses That Are Leaving Your Data Exposed to AI Risk

The generative AI revolution isn't on the horizon. It's already reshaping the way your employees work. Across every industry, workers are adopting AI-powered productivity tools at a pace that far outstrips most organizations' security and governance programs. The question is no longer whether your organization will use AI, but whether you're prepared to use it securely. The challenge is real, but so are the misconceptions that keep organizations from taking action.

How 1Password is building a culture of AI fluency through AI champions

If 2025 was the year of AI adoption, 2026 is when AI evolves from a software story to a people story. Katya Laviolette, our Chief People Officer, explored this idea in a recent Forbes article about how 1Password’s internal network of AI Champions is shaping this evolution and helping us set the standard for how we use AI to drive impact across 1Password.