
Secret Scanning For AI Coding Tools With ggshield

Introducing ggshield AI hooks from GitGuardian to help stop AI coding assistants from leaking secrets. See how ggshield can scan prompts, tool calls, file reads, MCP calls, and tool output inside AI coding tools like Cursor, Claude Code, and VS Code with GitHub Copilot. When a secret is detected, ggshield can block the action before sensitive data is sent or exposed. You will also see how simple the setup is, with flexible install options for local or global use. This adds practical guardrails to AI-assisted development and helps teams move fast without increasing secret sprawl.
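
As a concrete illustration of the blocking pattern, here is a minimal Python sketch, assuming ggshield is installed (pip install ggshield) and authenticated (ggshield auth login): it writes the outgoing prompt to a temporary file, scans it with the standard `ggshield secret scan path` command, and blocks the send when the scan exits non-zero. The `prompt_is_safe` helper and the hook wiring are hypothetical; GitGuardian's actual AI hooks integrate with the coding tools directly.

```python
# Illustrative sketch only: GitGuardian's AI hooks are wired into the
# coding tool itself; this just shows the block-on-detection pattern.
# `ggshield secret scan path` exits non-zero when it finds a secret,
# which serves as the block signal here.
import os
import subprocess
import tempfile

def prompt_is_safe(prompt_text: str) -> bool:
    """Return True if ggshield finds no secrets in the outgoing prompt."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(prompt_text)
        path = f.name
    try:
        result = subprocess.run(
            ["ggshield", "secret", "scan", "path", path],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0  # 0 means no incidents detected
    finally:
        os.unlink(path)

if __name__ == "__main__":
    # A fake credential for demonstration purposes.
    prompt = "Why does auth fail? token = 'ghp_0123456789abcdef0123456789abcdef0123'"
    if not prompt_is_safe(prompt):
        print("Blocked: ggshield detected a secret in the prompt.")
```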

Enterprise AI Security Use Cases: What Security Teams Are Solving For

Enterprise AI adoption is no longer a future problem. The average organization uses 54 generative AI (genAI) applications, and endpoint AI agent adoption is accelerating, with Cyberhaven research tracking 276% growth in 2025. Security programs have struggled to keep pace with either trend. The AI security gap is technical, not philosophical: most organizations already have AI acceptable use policies; what they lack are the technical controls to enforce them.

Building Smarter Virtual Assistants with Gemini 3 Flash API: AI for Seamless Workflow Automation

As teams become more distributed and workloads continue to increase, the need for effective automation tools has never been greater. Traditional methods of collaboration often fall short when it comes to handling repetitive tasks, managing high volumes of information, or providing real-time, intelligent support. That's where AI virtual assistants come in, changing how teams collaborate, streamline workflows, and boost productivity.
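
To make that concrete, here is a minimal sketch of a task-extraction assistant built on Google's `google-genai` Python SDK. The model ID `gemini-3-flash` is inferred from the article's title and may not match the exact identifier exposed by the API; everything else follows the SDK's standard `generate_content` call.

```python
# Minimal sketch, assuming the google-genai SDK (pip install google-genai)
# and a GEMINI_API_KEY environment variable. The model ID "gemini-3-flash"
# is an assumption taken from the article title.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

transcript = """
Alice: We need the Q3 report drafted by Friday.
Bob: I'll pull the numbers from the dashboard tomorrow.
Alice: And someone should book the review meeting with legal.
"""

response = client.models.generate_content(
    model="gemini-3-flash",  # substitute the model ID you have access to
    contents=(
        "Extract a bullet list of action items with owners and deadlines "
        "from this meeting transcript:\n" + transcript
    ),
)
print(response.text)
```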

The Agentic Identity Crisis: Why Your AI Agents Are Your Biggest Identity Blind Spot in 2026

An intern gets admin access to production for a temporary task, but nobody remembers to revoke it. Now imagine that intern works at machine speed, never sleeps, can chain dozens of actions before you’ve read the Slack ping, and has no instinct for when they’re about to do something irreversible.

Secure What Matters: Scaling Effortless Container Security for the AI Era

In November, we shared our vision for the Future of Snyk Container, outlining a fundamental shift in how teams secure the modern container lifecycle. We promised a future where security doesn’t just “scan” but scales effortlessly with the speed of the AI-driven, agentic world. Today, we are thrilled to announce that we are moving from vision to reality.

AI-Powered Human Risk Management Shifts the Focus to Adaptive, Behavior-Based Training

Human risk management (HRM) addresses one of the most persistent cybersecurity vulnerabilities: humans. Social engineering attacks, which trick users into taking risky actions, are a factor in 98% of cyberattacks, not because they are technically complex but because they manipulate employee behavior. Unlike traditional, one-size-fits-all security awareness training, HRM focuses on changing employee behavior through monitoring and targeted reinforcement.

Introducing the Datadog Code Security MCP

AI-assisted development helps teams write code faster, but that speed comes with added security risk. As agents generate more code, they can introduce vulnerabilities, insecure dependencies, or exposed secrets, often before a human reviewer ever sees the change. Security teams are left reviewing more code with the same resources, which makes it harder to catch issues early.
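
For readers new to MCP, the sketch below shows the protocol shape using the official `mcp` Python SDK: spawn a server, initialize a session, list its tools, and call one. The `code-security-mcp` command and the `scan_diff` tool are hypothetical placeholders, not Datadog's actual interface.

```python
# Generic MCP client sketch using the official `mcp` Python SDK
# (pip install mcp). The server command and tool name below are
# hypothetical placeholders, not Datadog's actual MCP interface.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Hypothetical launch command for a code-security MCP server.
    server = StdioServerParameters(command="code-security-mcp", args=["--stdio"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            # Hypothetical tool that scans a diff for vulnerabilities and secrets.
            result = await session.call_tool(
                "scan_diff", arguments={"diff": "+ password = 'hunter2'"}
            )
            print(result.content)

asyncio.run(main())
```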

What is the NIST AI Risk Management Framework?

The NIST AI Risk Management Framework is a voluntary guide that helps organizations identify and reduce risks in AI systems. It was released in January 2023 by the U.S. National Institute of Standards and Technology (NIST). The framework is built around four core functions, Govern, Map, Measure, and Manage, and is meant to help teams use AI responsibly. It is also industry-agnostic and technology-neutral, so it applies regardless of your sector or the AI systems you use.