
Building continuous compliance with Aikido and Comp AI

Compliance evidence only works if it reflects the current state of the system. At Aikido, we’ve always treated compliance as a byproduct of good security, not a separate exercise teams need to prepare for. That’s why Aikido integrates with multiple compliance platforms. The goal is simple: let teams use the security data generated in Aikido wherever they run their compliance programs, without changing how they work or maintaining parallel processes.

Attackers exploited OpenClaw's popularity #cybersecurity #ai #podcast

In this week's Intel Chat, Chris Luft and Matt Bromiley discuss how a malicious VS Code extension impersonated OpenClaw (formerly ClawdBot) to distribute remote access malware to developers. Matt breaks down a critical pattern: whenever there's a stampede toward new technology, threat actors will find a way to inject a malicious version of it. The episode also covers PeckBirdie (a JScript-based C2 framework), Shiny Hunters' massive phishing campaign, and a Russian cyberattack on Poland's power grid.

280+ Leaky Skills: How OpenClaw & ClawHub Are Exposing API Keys and PII

On Monday, February 3rd, Snyk Staff Senior Engineer Luca Beurer-Kellner and Senior Incubation Engineer Hemang Sarkar uncovered a massive systemic vulnerability in the ClawHub ecosystem (clawhub.ai). Unlike the malware campaign we reported yesterday, which involved specific malicious actors, this new finding reveals a broader and perhaps more dangerous trend: widespread insecurity by design. In this write-up, Snyk presents Leaky Skills, uncovering exposed credentials and insecure credential usage in Agent Skills.
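The write-up does not describe Snyk's detection method, but findings of this kind are typically surfaced by pattern-matching skill sources for hardcoded secrets. A minimal sketch, with illustrative regexes that are assumptions rather than Snyk's actual checks:

```python
import re

# Hypothetical patterns for common credential formats; the specific
# checks used in the Leaky Skills research are not described here.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|token|secret)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_for_credentials(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a skill file."""
    findings = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

skill_source = 'API_KEY = "sk_live_abcdef1234567890xyz"'
print(scan_for_credentials(skill_source))
```

Real scanners add entropy checks and provider-specific validators on top of regexes to cut false positives, but the core idea is the same: the credential should never be in the skill file at all.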

Agentic AI Security and Regulatory Readiness: A Security-First Framework

AI is getting smarter: instead of waiting to be told what to do, agents are starting to make their own calls and complete entire jobs on their own. These autonomous systems can modify data, use tools, and interact with people across many environments, often acting faster than humans can oversee. Staying safe means a new approach, one built around governing what these agents do, keeping their actions visible, and always knowing who is responsible.

6 Top AI Pentesting Platforms in 2026

AI penetration testing has moved beyond experimentation and into operational reality. What started as automation layered on top of traditional scanners has evolved into platforms capable of simulating attacker behavior, validating exploit paths, and continuously reassessing exposure as environments change.

Four Reasons Why Your Business Needs to Keep Its Software Updated

Have you ever told yourself that software updates are optional? That little reminder pops up, you ignore it, and you get on with your day. Nothing breaks immediately, so you assume everything's fine. But the hard truth is that outdated software doesn't usually fail in dramatic ways. It fails slowly. Small glitches. Weird delays. Tiny problems that pile up until one day you're dealing with a mess that could've been avoided. And sometimes the silent problems are the most dangerous, such as security exploits that target outdated software.

MomentProof Deploys Patented Digital Asset Protection

MomentProof, Inc., a provider of AI-resilient digital asset certification and verification technology, today announced the successful deployment of MomentProof Enterprise for AXA, enabling cryptographically authentic, tamper-proof digital assets for insurance claims processing. MomentProof's patented technology certifies images, video, voice recordings, and associated metadata at the moment of capture, ensuring claims evidence is protected against AI-based manipulation, deepfakes, and other malicious digital alterations.
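MomentProof's patented mechanism is not detailed in the announcement, but "cryptographically authentic, tamper-proof" certification at capture generally means binding the asset's hash and its metadata into a signed record. A minimal sketch of that general pattern, using an HMAC as a stand-in for whatever signing scheme a real system would use (all names here are illustrative):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real key held in an HSM

def certify(asset_bytes: bytes, metadata: dict) -> dict:
    """Bind an asset's hash and its capture metadata into a signed certificate."""
    digest = hashlib.sha256(asset_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "metadata": metadata}, sort_keys=True)
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify(asset_bytes: bytes, cert: dict) -> bool:
    """Recompute the signature and hash; any alteration breaks at least one."""
    expected = hmac.new(SECRET_KEY, cert["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["signature"]):
        return False
    return json.loads(cert["payload"])["sha256"] == hashlib.sha256(asset_bytes).hexdigest()

photo = b"\x89PNG...raw image bytes"
cert = certify(photo, {"captured_at": "2025-02-03T10:00:00Z", "device": "cam-01"})
print(verify(photo, cert))         # True for the original asset
print(verify(photo + b"x", cert))  # False after any alteration
```

The key property is that certification happens at the moment of capture: a deepfake or edited image produced later cannot carry a valid certificate for the original bytes.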

AI Principles in Practice: Auditability is non-negotiable

When AI acts on your behalf, auditability is non-negotiable. In the latest Principles in Practice video, Anand Srinivas, 1Password VP of Product & AI, explains why every AI agent action involving credentials must leave a clear audit trail:

- Who approved the access, and why
- When and where were credentials used
- What did the agent access, and when
- Did access end when the task was completed

Without auditability, AI usage can break trust between employees, security teams, customers, and regulators.
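The questions an audit trail must answer map naturally onto one record per credential-using agent action. A minimal sketch of such a record, with field names that are illustrative assumptions, not a 1Password API:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentCredentialAuditEvent:
    """One audit record per agent action involving credentials.

    Field names are hypothetical; they mirror the questions an
    auditable trail should answer, not any vendor's schema.
    """
    agent_id: str
    approved_by: str          # who approved the access
    approval_reason: str      # ...and why
    credential_id: str
    used_at: str              # when the credentials were used
    used_from: str            # ...and where
    resources_accessed: list  # what the agent accessed
    access_revoked: bool      # did access end when the task completed

event = AgentCredentialAuditEvent(
    agent_id="deploy-bot-7",
    approved_by="alice@example.com",
    approval_reason="scheduled production deploy",
    credential_id="vault/prod/deploy-token",
    used_at=datetime.now(timezone.utc).isoformat(),
    used_from="ci-runner-42",
    resources_accessed=["k8s:prod/deployments"],
    access_revoked=True,
)
print(asdict(event)["access_revoked"])
```

Emitting one immutable record like this per action is what lets security teams answer the "who, when, what, and did it end" questions after the fact.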