Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Your AI Just Became the Insider Threat | CrowdStrike Global Threat Report 2026

Hackers can reach your critical systems in just 27 seconds. In 2025, AI-powered cyberattacks surged 89% as adversaries weaponized the same AI tools organizations use every day. From eCrime groups to China-nexus actors, North Korean operatives, and Russian intelligence, AI is accelerating and reshaping global threat activity. In this video, you’ll learn: Adversaries are not just using AI. They are weaponizing your AI against you.

AI certificate

You can ask AI to create a song that sounds like a famous band sang it. But what happens if you use it or share it? Are there legal or other implications? AI tools must be visible and governed. Shadow AI isn’t. Take Cato’s AI in Cybersecurity course to understand the risks of unsanctioned AI tools. It’s free, comes with a downloadable cert, and earns CPE credits. Register now.

Why AI Features Don't Equal Better Vulnerability Management

AI is becoming table stakes in vulnerability and exposure management. In this candid webinar conversation, Chris Ray, Field CTO at GigaOm, and Will Gorman, CTO and leader of AI initiatives at Nucleus Security, challenge the assumption that more AI automatically leads to better outcomes.

AI Moves Fast, Privacy Has to Move Faster with Ojas Rege

In this episode, Caleb Tolin welcomes Ojas Rege of OneTrust for a practical, wide-ranging conversation on how data privacy and governance must evolve alongside enterprise AI adoption. Ojas explains why AI fundamentally changes the privacy conversation: the same systems that enable organizations to move faster can also cause harm faster when guardrails aren’t in place. From agentic AI systems that dynamically repurpose data to general-purpose models that blur traditional notions of “intended use,” the challenge isn’t just compliance—it’s trust.

AI Compliance: 5 Key Frameworks, Challenges, and Best Practices

AI compliance ensures AI systems follow laws, ethics, and standards by managing risks like bias, privacy violations, and lack of transparency through robust governance, documentation, and continuous monitoring, using frameworks like the EU AI Act and NIST AI Risk Management Framework (RMF) to build trust and avoid penalties in developing, deploying, and operating AI.

What is a Prompt Injection Attack?

AI tools are quickly becoming part of everyday business workflows. From chatbots to automation tools, large language models now handle sensitive tasks and data. But with this growth comes new security risks. One of the biggest emerging threats is the prompt injection attack, in which attackers manipulate inputs to cause AI systems to ignore their original instructions. Unlike traditional cyberattacks, this method exploits weaknesses through language rather than code.

The Machine War: Why MSPs Must Move from AI-Assistance to Autonomy

In 2026, the digital landscape has shifted from a world of "AI assistants" to one of autonomous operators. For managed service providers (MSPs), this evolution marks the end of the traditional "land and expand" human services playbook and the beginning of a high-speed era of machine-on-machine warfare.

The Next Market Disruption: Agentic SOC

Predicting a market disruption is difficult, but the vast rewards of being correct make it worthwhile. Unfortunately, prediction becomes tougher when marketing teams start labelling everything as a "market disruptor". Much like the stock market, if something is being sold to you as “the investment of a lifetime”, it almost certainly is not. Yet market disruptors do exist, and the organizations that identify them enjoy generational success.