
Secure by Default: Why Snyk and Augment Code are the New Standard for AI Development

AI coding assistants have fundamentally changed development velocity. With tools like Augment Code, developers can now build and iterate at a pace that was unimaginable just a few years ago. However, this explosion in speed has created a new challenge: security teams, often still relying on manual review processes, are becoming the bottleneck.

AI Compliance Training: EU AI Act & 90-Day Implementation Strategy

Executive Summary: A technical briefing on navigating the AI compliance landscape, focusing on the EU AI Act, US federal mandates, and state-level regulations. This session provides a structured 90-day roadmap for AI system governance, risk mitigation, and role-based training deployment.

Cato's ASK AI Assistant: Turning Complex Network Operations Into Simple Conversations

Every superhero needs a sidekick. For your network and security teams, that sidekick is Cato’s ASK AI Assistant, built to help you see, solve, and secure faster than ever. This isn’t a basic Q&A tool: it draws on customer-specific information and the ability to work with other tools to answer complex questions.

Why AI Transformations in Security Fail Like New Year's Gym Resolutions

Enterprise AI adoption moved fast. Speed mattered. Shipping mattered. Getting AI into production mattered. That phase is over. Security leaders are now asking a harder question: whether the AI already embedded in security operations is safe, explainable, and aligned with how modern SOC teams actually work. The focus has shifted from adoption to trust, specifically explainability, governance, and operational fit.

AI Automation for MSPs: Boost Productivity, Cut Costs, and Improve Service Quality

AI automation for managed service providers is creating a major shift from reactive to proactive service delivery, allowing MSPs to streamline ticket handling, accelerate resolution times, and operate far more efficiently. Real-world data shows that AI-driven automation can help service desks close significantly more tickets per technician by automating triage and routine tasks, while also reducing operational costs by 25 to 40 percent through improved workflow efficiency and reduced manual labor.

The Legitimate Bot Traffic Security Teams Can No Longer Overlook

Security teams have spent years refining their ability to detect and stop malicious bots. That work remains critical. Automated traffic now accounts for more than half of all web traffic, according to Imperva's 2025 Bad Bot Report. What has changed is the scale and influence of legitimate bots and the blind spots they introduce into modern security programs.

Exabeam Introduces First Connected System for AI Agent Behavior Analytics and AI Security Posture Insight

Industry leadership expanded with connected capabilities that not only uncover AI agent activity but also centralize investigation and deliver measurable AI security posture insights.

The Silent Threat to the Agentic Enterprise: Why BOLA is the #1 Risk for AI Agents

In the race to deploy autonomous AI agents, organizations are inadvertently building on a foundation of shifting sand. While security teams have spent the last year focused on "Prompt Injection" and "Model Poisoning," a much older, more dangerous adversary has quietly become the primary attack vector for the agentic era: Broken Object Level Authorization (BOLA).
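To make the BOLA pattern concrete, here is a minimal, hedged sketch in Python. The document store, user names, and handler functions are hypothetical illustrations, not code from any vendor mentioned above; the point is only the missing ownership check that BOLA exploits.

```python
# Hypothetical document store keyed by object ID.
DOCUMENTS = {
    "doc-1": {"owner": "alice", "body": "Alice's notes"},
    "doc-2": {"owner": "bob", "body": "Bob's notes"},
}

def get_document_vulnerable(user: str, doc_id: str) -> str:
    # BOLA: the handler trusts the caller-supplied doc_id and never
    # checks ownership, so any authenticated caller (or AI agent acting
    # on its behalf) can read any object just by guessing IDs.
    return DOCUMENTS[doc_id]["body"]

def get_document_fixed(user: str, doc_id: str) -> str:
    # Object-level authorization: verify that the requesting identity
    # owns the object before returning it.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != user:
        raise PermissionError(f"{user} may not read {doc_id}")
    return doc["body"]
```

An agent granted a broad API token amplifies the vulnerable version: it can enumerate object IDs far faster than a human attacker, which is why per-object authorization checks matter more, not less, in the agentic era.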

Model Context Protocol Server: The Universal Remote for AI Agents

The Model Context Protocol (MCP) is emerging as a foundational interoperability layer for agentic AI, embraced by major platform providers. MCP simplifies how AI models connect to external tools and data. Think of it as a universal remote for security platforms: instead of building fragile, one-off integrations, MCP allows AI to discover and use capabilities dynamically. For SIEM and detection providers, this shift is significant.
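The "universal remote" idea can be sketched as runtime tool discovery. This is a toy stand-in, not the actual MCP wire protocol or SDK; the `ToolServer` class and `lookup_alert` tool are invented for illustration.

```python
from typing import Callable, Dict

class ToolServer:
    """Toy stand-in for an MCP-style server exposing discoverable tools."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def list_tools(self) -> list:
        # Discovery: the agent asks the server what it can do,
        # rather than shipping with a hard-coded integration.
        return sorted(self._tools)

    def call(self, name: str, **kwargs) -> str:
        return self._tools[name](**kwargs)

server = ToolServer()
server.register("lookup_alert", lambda alert_id: f"details for {alert_id}")

# The agent discovers available capabilities, then invokes one by name.
tools = server.list_tools()            # ['lookup_alert']
result = server.call("lookup_alert", alert_id="A-42")
```

The design point for SIEM vendors is the same regardless of implementation detail: capabilities are advertised and invoked through one protocol, so adding a new detection tool does not require a new bespoke connector on the agent side.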