Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

OpenClaw (Moltbot) Personal Assistant Goes Viral - And So Do Your Secrets

In early 2026, Moltbot, a new AI personal assistant, went viral. GitGuardian detected more than 200 leaked secrets related to it, including some from healthcare and fintech companies. Our contribution to Moltbot: a skill that turns secret scanning into a conversational prompt, letting users ask "is this safe?".

Threat hunting to detection engineering: Analyzing real malware with Claude Code, LimaCharlie, and Linux

Claude Code, originally just auto-complete on steroids for IDEs, shows a lot of promise for becoming a major tool in the DFIR/detection engineering/security analyst's toolbox. Whether it's Claude Code's support for MCP, its agent skills, or its general ability to quickly figure out how to accomplish a given task, it is rapidly becoming more than a code generation tool. This is the first of a three-part series.

Vibe Coding Speeds Up Mobile Apps But Creates New Security Risks

AI-assisted development has crossed a tipping point. Mobile teams are no longer debating whether to use AI to write code. They are deciding how fast they can ship with it. This shift, often called vibe coding, prioritizes intent and speed over manual implementation. Developers describe what they want, and AI fills in the rest. Velocity improves. Releases accelerate. But security assumptions quietly break. For mobile applications, that risk compounds.

Episode 7 - Practical AI for Zeek, MITRE, and Security Docs

In Episode 7 of Corelight DefeNDRs, join me, Richard Bejtlich, as I sit down with Dr. Keith Jones, Corelight's principal security researcher, to discuss the practical applications of AI in enhancing network security. We delve into how large language models (LLMs) can assist in cleaning up documentation and generating Zeek scripts, sharing insights from our extensive experience in incident response and coding. Keith reveals the challenges and successes he has encountered using LLMs to streamline processes, including their role in analyzing MITRE techniques.

Introducing Moltworker: a self-hosted personal AI agent, minus the minis

The Internet woke up this week to a flood of people buying Mac minis to run Moltbot (formerly Clawdbot), an open-source, self-hosted AI agent designed to act as a personal assistant. Moltbot runs in the background on a user's own hardware, has a sizable and growing list of integrations for chat applications, AI models, and other popular tools, and can be controlled remotely. Moltbot can help you manage your finances and social media and organize your day, all through your favorite messaging app.

Measuring Agentic AI Posture: A New Metric for CISOs

In cybersecurity, we live by our metrics. We measure Mean Time to Respond (MTTR), Dwell Time, and Patch Cadence. These numbers indicate to the Board how quickly we respond when issues arise. But in the era of Agentic AI, reaction speed is no longer enough. When an AI Agent or an MCP server is compromised, data exfiltration happens in milliseconds rather than days. If you are waiting for an incident to measure your success, you have already lost.

Beyond Pattern Matching: How AI-Native File Classification Solves Modern DLP Challenges

Legacy DLP operates on a fundamental constraint: it identifies sensitive data by matching patterns. Credit card numbers follow the Luhn algorithm. Social Security numbers conform to a nine-digit format. API keys match specific string patterns. This approach works for structured data, but it fails to address a critical reality: Your most sensitive assets aren't numbers. They're documents.
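To make the legacy approach concrete, here is a minimal sketch (my own illustration, not the vendor's implementation) of what pattern-based credit card detection looks like: a regex proposes candidate digit runs, and the Luhn checksum the article mentions filters out false positives. The regex and function names are hypothetical.

```python
import re

# Hypothetical legacy-DLP pattern: 13-16 digits, optionally separated
# by spaces or hyphens, bounded by word boundaries.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any doubled digit above 9, and require the
    total to be divisible by 10."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return regex candidates that also pass the Luhn check."""
    return [m.group().strip() for m in CARD_RE.finditer(text)
            if luhn_valid(m.group())]
```

This is exactly the constraint the article points at: the sketch catches a well-formed card number in free text, but it has no way to recognize that an unstructured document (a draft contract, a design spec) is sensitive, because documents have no checksum to match.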

Introducing Forward AI

As enterprises move toward agentic operations, speed without data accuracy becomes a liability. At Forward Networks, we recognized this challenge and set out to deliver a solution: speed backed by mathematical accuracy. In networking, acting on incomplete or approximate data is not a mere inconvenience; it is a cause of outages, security exposure, and operational risk.

Stop Staring at JSON: How GenAI is Solving the API "Context Crisis"

There is a moment that happens in every SOC (Security Operations Center) every day. An alert fires. An analyst looks at a dashboard and sees a URI: POST /vs/payments/proc/77a. And then they stop. They stare. And they ask the question that kills productivity: "What does this thing actually do?" Is it a critical payment gateway? A test function? Does it handle credit card numbers or just transaction IDs?