Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

What Are OpenClaw and Agentic AI? The Security Issues You Need to Be Aware of Now

Over the past several weeks, OpenClaw and Moltbook have exploded across the headlines. Outlets have published stories about AI agents organizing themselves or even acting independently on Moltbook. SecurityScorecard’s Jeremy Turner, VP of Threat Intelligence & Research, and Anne Griffin, Head of AI Product Strategy, discuss what OpenClaw is, how agentic AI works, and where the real security issues lie, based on new research from SecurityScorecard's STRIKE Threat Intelligence team.

Exposed OpenClaw Deployments are Turning Agentic AI Into an Attack Surface: What To Do Next

SecurityScorecard's STRIKE Threat Intelligence team has uncovered tens of thousands of exposed OpenClaw instances, many of them vulnerable to Remote Code Execution (RCE). These exposed instances leave users and organizations open to attack. OpenClaw and other agentic AI tools are designed to take actions on a user’s behalf, interact with infrastructure, and move across connected services. That functionality is the appeal; for users around the globe, it is also the risk.

What Are Moltbot and Moltbook? Why the Agentic AI Frenzy Is a Security Trap

AI agents aren’t taking over. But agentic AI without security is a real problem. Over the last few days, Moltbot and its social platform Moltbook have surged across headlines and social media. Some are calling it a glimpse of artificial general intelligence. Others say AI agents are organizing themselves. That’s not what’s happening. In this video, SecurityScorecard’s Jeremy Turner, VP of Threat Intelligence & Research, breaks down what Moltbot actually is, why this isn’t AGI, and where the real danger lives.