
Agentic AI and Non-Human Identities Demand a Paradigm Shift in Security: Lessons from NHIcon 2026

In the race to innovate, software has repeatedly reinvented how we define identity, trust, and access. In the 1990s, the web made every server a perimeter. In the 2010s, the cloud made every identity a workload. Here in 2026, agentic AI makes every action autonomous.

GitProtect 2.1.0 Overview: Jira Granular Backup and Other New Features

What does the Xopero ONE and GitProtect 2.1.0 release bring? Jira Granular Backup, backup and restore for Azure DevOps Artifacts, extended protection coverage for GitHub Projects with draft issues, and much more. Watch the video where we break down what's new in our latest release and why it matters for DevOps teams and Jira admins.

OpenClaw (Moltbot) Personal Assistant Goes Viral - And So Do Your Secrets

In early 2026, Moltbot, a new AI personal assistant, went viral. GitGuardian detected 200+ leaked secrets related to it, including some from healthcare and fintech companies. Our contribution to Moltbot: a skill that turns secret scanning into a conversational prompt, letting users ask "is this safe?".
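To make the idea concrete, here is a minimal sketch of what such a skill could do under the hood: forward a pasted snippet to GitGuardian's public v1 content-scanning endpoint and turn the result into a plain-language answer. The function name, response wording, and skill wiring are hypothetical, not the actual Moltbot skill.

```python
import os
import requests

GITGUARDIAN_SCAN_URL = "https://api.gitguardian.com/v1/scan"

def is_this_safe(text: str) -> str:
    """Scan a snippet for leaked secrets before it is shared with an assistant."""
    resp = requests.post(
        GITGUARDIAN_SCAN_URL,
        headers={"Authorization": f"Token {os.environ['GITGUARDIAN_API_KEY']}"},
        json={"filename": "snippet.txt", "document": text},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()
    count = result.get("policy_break_count", 0)
    if count == 0:
        return "No secrets detected - looks safe to share."
    findings = ", ".join(b["type"] for b in result.get("policy_breaks", []))
    return f"Unsafe: {count} potential secret(s) found ({findings})."

if __name__ == "__main__":
    # A fake connection string with embedded credentials should trip the scanner.
    print(is_this_safe('db_url = "postgres://admin:hunter2@prod-db:5432/app"'))
```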

Planning Your Workload Identity Roadmap: Standards, Patterns, and the Path Ahead - Webinar

With non-human identities expected to outnumber human identities 100 to 1 in 2025, the way we manage machine credentials is fundamentally broken. 83% of attacks involve compromised secrets, yet many organizations still rely on hardcoded keys, sprawling secrets, and scattered vault deployments.
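As a baseline illustration of moving away from hardcoded keys, the sketch below resolves a machine credential at runtime from the environment, where a vault or workload-identity system can inject a short-lived token. The variable name PAYMENTS_API_KEY is hypothetical.

```python
import os

# Anti-pattern: a hardcoded, long-lived credential checked into source.
# API_KEY = "sk_live_51H..."  # ends up in git history, CI logs, and backups

def get_api_key() -> str:
    """Resolve the credential at runtime instead of baking it into the code.

    PAYMENTS_API_KEY is a hypothetical variable name; in practice a vault
    agent or workload-identity broker would populate it with a short-lived token.
    """
    key = os.environ.get("PAYMENTS_API_KEY")
    if key is None:
        # Fail loudly rather than fall back to a hardcoded default.
        raise RuntimeError("PAYMENTS_API_KEY is not set")
    return key
```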

Save Time With GitGuardian's ML-Powered Similar Incident Grouping

GitGuardian is excited to introduce Machine Learning-Powered Similar Incident Grouping, which cuts through the noise by identifying incident-specific patterns across your inventory and clustering incidents that belong together, so you can handle repetitive cases efficiently and reduce incident response toil.
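The real feature uses machine learning over richer incident-specific patterns; the toy sketch below only shows the grouping idea with a hand-picked key (detector type plus repository), so a batch of repetitive incidents can be triaged once. All names here are illustrative.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Incident:
    detector: str   # e.g. "aws_access_key"
    repo: str       # repository where the secret surfaced
    file_path: str  # location of the occurrence

def group_similar(incidents: list[Incident]) -> dict[tuple, list[Incident]]:
    """Bucket incidents sharing detector type and repository into one group."""
    groups: dict[tuple, list[Incident]] = defaultdict(list)
    for inc in incidents:
        groups[(inc.detector, inc.repo)].append(inc)
    return groups

incidents = [
    Incident("aws_access_key", "org/app", "config/dev.env"),
    Incident("aws_access_key", "org/app", "scripts/deploy.sh"),
    Incident("slack_webhook", "org/site", "ci/notify.yml"),
]
for key, members in group_similar(incidents).items():
    print(key, "->", len(members), "incident(s)")
```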

Meet GitGuardian's Machine Learning-Powered Risk Scoring

The GitGuardian Platform now automatically ranks every secrets incident with a risk score from 0–100, turning alert floods into a prioritized, trustworthy work queue. Scores are computed from incident context (like validity, exposure, where it was found, and exploitability) and build on existing ML capabilities like Secret Enricher and our False-Positive Remover, which cuts false positives by 80%+.
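To illustrate how context signals can roll up into a 0-100 score, here is a toy weighted-sum sketch over boolean incident attributes. The actual platform learns these signals with ML; the weights and attribute names below are made up for illustration.

```python
def risk_score(valid: bool, publicly_exposed: bool,
               in_production_repo: bool, exploitable: bool) -> int:
    """Toy 0-100 score from boolean incident context; weights are illustrative."""
    weights = {
        "valid": 40,             # the secret still works
        "publicly_exposed": 30,  # found in a public repo or package
        "in_production_repo": 15,
        "exploitable": 15,       # grants meaningful access if abused
    }
    flags = {
        "valid": valid,
        "publicly_exposed": publicly_exposed,
        "in_production_repo": in_production_repo,
        "exploitable": exploitable,
    }
    return sum(w for name, w in weights.items() if flags[name])

# A valid, publicly exposed secret in a production repo lands near the top of the queue:
print(risk_score(valid=True, publicly_exposed=True,
                 in_production_repo=True, exploitable=False))  # 85
```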