Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Is AI a cost-effective solution to alert noise? #cybersecurity #AI #SOC #podcast

Security teams are drowning in alerts, and AI might not be the answer everyone thinks it is. In this episode, Erik Bloch, VP of Security at Illumio, breaks down the math on why AI-powered alert triage may be financially unfeasible for most organizations. With 85 to 90 percent of alerts being non-malicious, security teams are still sorting through massive volumes of noise to find the real threats. Many vendors are betting that AI will solve this problem by triaging alerts at scale. But the reality?

How Generative AI is Changing the DLP Landscape

Generative AI has revolutionized productivity, but it has also introduced a new class of data risk that legacy DLP tools simply can’t see. From engineers pasting source code into ChatGPT to marketers rewriting strategy docs, sensitive IP is leaving the browser through "Shadow AI" channels daily. Learn why traditional pattern matching fails against LLMs and how a data lineage approach secures AI usage without halting innovation.

Can Claude Opus 4.5 Build a SECURE Note Taking App?

Can Claude Opus 4.5 actually build a secure, fully functional note-taking app? In this video, I challenge the latest Claude model to create an app with real features — create, edit, update, delete, plus basic security — and see if the code holds up in practice. This is a real test of how far AI can go in building usable software.

Secure your code at scale with AI-driven vulnerability management

As development teams adopt generative AI at an unprecedented pace, security teams face an evolving set of challenges in securing the software development life cycle. The increasing speed and scale of code changes make it more difficult for organizations to manage risk effectively. Legacy scanners often fail to keep up, returning slow results and noisy alerts that increase remediation time and leave organizations exposed to potential breaches.

Zero Trust That Actually Works: How Reach Maps NIST & CISA Frameworks Into Real Security Gains

Most organizations don’t lack intent; they lack a clear understanding of what’s deployed today, what gaps matter most, and how to turn guidance into enforceable baselines. Reach connects to your existing security tools and automatically maps configurations to established maturity models like CISA’s Zero Trust Maturity Model 2.0 — producing a real-time posture assessment across identity, device, endpoint, email, and network with no surveys or guesswork.

Torq HyperAgents: The Next Evolution of Agentic SecOps

See how Torq harnesses AI in your SOC to detect, prioritize, and respond to threats faster. Request a Demo Tal Benyunes was one of the first engineers at Torq and now leads Product for HyperAgents, Torq’s agentic AI initiative. Shaped by early career roles in mission-critical cybersecurity environments and leading companies, Tal brings deep technical expertise and strategic insight to the development of AI Agents.

Release 783 Brings LLM Monitoring, ARM Support, Enhanced Rules, Mac Improvements and More

We are excited to announce Platform Release 783, a massive update with over 470 features and improvements, focusing on adapting to the modern digital workspace by delivering deep visibility, better protection, and higher privacy. Here is a summary of the new features and improvements available in this release. For an extensive list, please refer to the detailed Release Notes.

Say Hello to Ask Pepper AI: Turning API Security into a Conversation

In the world of cybersecurity, we have a "data" problem. We have more of it than ever before, more logs, more alerts, and definitely more APIs. But recently, this challenge has compounded. The rise of Agentic AI and Model Context Protocols (MCPs) has exploded the number of machine-to-machine connections in our environments. These agents spin up new pathways and access data in ways that are often invisible to traditional monitoring.

Navigating AI risks: understanding and mitigating prompt injection

AI is becoming a routine part of technical operations. Teams use models to support ticket triage, incident routing, knowledge retrieval, code analysis, and customer interactions. As these agents move closer to production workflows, the conversation about security becomes much more important. One of the most persistent and widely misunderstood issues is prompt injection. It is not a vulnerability that can be fully patched or trained away.

Modernizing Vendor Risk for the AI Era

See how Riverside County transformed vendor risk from a manual, time-consuming process into a streamlined, data-driven operation that speeds decision-making, reduces risk, and enables innovation. Also hear about their approach to managing emerging AI risks, with practical, actionable lessons other security teams can apply. Interested in finding out more about UpGuard?