
Secret Scanning For AI Coding Tools With ggshield

Introducing ggshield AI hooks from GitGuardian to help stop AI coding assistants from leaking secrets. See how ggshield can scan prompts, tool calls, file reads, MCP calls, and tool output inside AI coding tools like Cursor, Claude Code, and VS Code with GitHub Copilot. When a secret is detected, ggshield can block the action before sensitive data is sent or exposed. You will also see how simple the setup is, with flexible install options for local or global use. This adds practical guardrails to AI-assisted development and helps teams move fast without increasing secret sprawl.
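The blocking behavior described above can be sketched as a toy pre-send hook: before a prompt or tool call leaves the editor, scan its text and refuse to send it if anything secret-shaped matches. This is a simplified stand-in using a few illustrative regexes, not ggshield's detection engine, and `guard_prompt` is a hypothetical helper name.

```python
import re

# Illustrative patterns only: ggshield's real engine uses hundreds of
# detectors plus validity checks, not a short regex list like this.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                     # GitHub token shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def guard_prompt(prompt: str) -> str:
    """Return the prompt unchanged, or raise before it is ever sent."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("blocked: possible secret detected in prompt")
    return prompt
```

The key design point mirrors the hook model: the check runs before the data leaves the machine, so a detection blocks the action rather than merely logging it after the fact.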

Trivy/LiteLLM Breach: How to Identify Your Exposure and Contain It - 20-min Live Demo

In this 20-minute live demo with Eric Fourrier (CEO and Founder of GitGuardian), Guillaume Valadon (Staff Cybersecurity Researcher at GitGuardian), and Dwayne McDaniel (Principal Developer Advocate at GitGuardian), you'll see how to determine whether your machines were compromised by the ongoing Trivy and LiteLLM supply chain attack (attributed to TeamPCP), then scan for exposed secrets and begin remediation, step by step.

Trivy's March Supply Chain Attack Shows Where Secret Exposure Hurts Most

The Trivy story is moving quickly, and the latest reporting makes one thing clear: this is no longer just a GitHub Actions tag hijack. What started as a compromise of trivy-action, setup-trivy, and the v0.69.4 release has expanded into malicious Docker Hub images.
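A tag hijack works because workflows commonly reference actions by mutable refs (a tag or branch name), which an attacker who controls the repository can repoint; pinning to a full 40-character commit SHA makes the reference immutable. As a rough illustration of auditing for this, the hypothetical helper below flags `uses:` lines in a workflow that are not SHA-pinned (the action names in the usage example are made up):

```python
import re

# A ref like `uses: example/some-action@v1` resolves at run time and can be
# repointed by whoever controls the tag; `@<40-hex-char SHA>` cannot.
USES = re.compile(r"^\s*-?\s*uses:\s*(\S+@\S+)\s*$")
PINNED = re.compile(r"^\s*-?\s*uses:\s*\S+@[0-9a-f]{40}\s*$")

def unpinned_actions(workflow_yaml: str) -> list[str]:
    """Return action refs that use a mutable tag or branch instead of a SHA."""
    findings = []
    for line in workflow_yaml.splitlines():
        match = USES.match(line)
        if match and not PINNED.match(line):
            findings.append(match.group(1))
    return findings
```

SHA pinning is a mitigation for the Actions side of the story; the malicious Docker Hub images call for the analogous habit of pulling images by digest rather than by tag.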