Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How GitGuardian and CyberArk MCP Servers Cut Secrets & Vault Sprawl with AI Automation

Watch the teams of GitGuardian and CyberArk for a demo-first session on how MCP (Model Context Protocol) servers can help you tame secrets sprawl and vault sprawl by letting developers use AI to trigger the right actions, with far less cognitive load! What you’ll learn.

What is Generative AI Security? Types, Risks & Best Practices

Generative AI security is the practice of protecting generative artificial intelligence models, applications, and their underlying training data from cyber attacks, data leakage, and unauthorized access. It focuses on securing both sides of the system—i.e., the AI itself (models, pipelines, APIs) and the sensitive data flowing into and out of it during real-world use.

I Tried 5 Prompt Injection Attacks (Here's What Happened)

In this video, we explore the growing security risk of prompt injection in large language model (LLM) applications. As AI becomes embedded in more products, new vulnerabilities emerge, especially through natural language manipulation. We break down how LLMs work, the importance of system prompts, and demonstrate five real-world prompt injection techniques used to extract sensitive information or bypass safeguards. You’ll see live examples using different models and learn why newer models are more resilient, but still not immune.

Real-Time AI Security: Securing Autonomous Agents in 2026

Is your security stack ready for the agentic revolution? As we move into 2026, Real-Time AI Security has become the new frontier for enterprise protection. In this episode of AI on the Edge, Amar (CEO of Protecto) sits down with security veteran and investor Anand Tangiraja to discuss why traditional "shift left" strategies and legacy tools are failing in the face of autonomous agents.

3 Reasons Your Security Can't Stop AI Attacks #shorts #ai

Is your SOC ready for the 10-minute attack? In 2026, traditional Security Operations Centers are failing to stop Agentic AI Attacks. Why? Because agents don't follow the rules of legacy software. In this Short, we break down the three reasons your current defense is obsolete. The 3 Reasons Your SOC is Too Slow.

Claude Code Cuts SOC Setup to 10 Minutes

Security teams accept that standing up a real SOC requires days of configuration, credential wrangling, and infrastructure work before any actual security engineering begins. With LimaCharlie, actual setup time is closer to ten minutes. It gives valuable time back to SecOps teams by managing infrastructure and simplifying onboarding and operations with Claude Code. Using agentic AI to deploy SOC capabilities means your team spends less time on infrastructure and more on security work.

Exposure as a Competency: How Agentic Exposure Management Can Differentiate High-Performing Teams

In today's fast-paced work environment, the factors that distinguish high-performing teams go well beyond technical skills and traditional leadership. Increasingly, organizations are recognizing "exposure" as a critical competency, one that shapes how teams interact with uncertainty, opportunity, and risk. While exposure has historically been viewed through a financial or risk management lens, it is now emerging as a core driver of organizational agility, innovation, and resilience.

Trusted AI Video Platforms for Safer Content Creation

AI-generated video content is growing fast, and so are the risks that come with it. Statista data shows a sharp rise in AI incidents tied to content generation, with deepfakes and rights violations among the most documented concerns. For creators, brands, and marketers, choosing the right AI video platform means thinking beyond output quality.