
Moltworker (for OpenClaw) & Markdown for Agents: Running AI on Cloudflare

Celso explains how Markdown for Agents was conceived, built, and shipped in just one week, why AI systems prefer markdown over HTML, and how converting a typical blog post from 16,000 HTML tokens to roughly 3,000 markdown tokens can reduce cost, improve speed, and increase accuracy for AI models. We also explore Moltworker, a proof-of-concept showing how a personal AI agent originally designed to run on a Mac Mini can instead run on Cloudflare’s global network using Workers, R2, Browser Rendering, AI Gateway, and Zero Trust.
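The token savings come from stripping markup overhead: HTML wraps content in tags, attributes, and styling that a model must tokenize but that carry little meaning, while markdown keeps roughly the same information in far fewer characters. A minimal sketch of the effect, using a crude characters-per-token heuristic rather than a real tokenizer (actual counts vary by model, and the snippets here are illustrative, not from the article):

```python
# Illustrates why the same content costs fewer tokens as markdown than as HTML.
# Uses a rough ~4-characters-per-token heuristic, not a real tokenizer.

html = (
    '<div class="post"><h1><span class="title">Hello</span></h1>'
    '<p style="margin:0">Agents <a href="/x" rel="nofollow">read</a> this.</p></div>'
)

markdown = "# Hello\n\nAgents [read](/x) this."

def approx_tokens(text: str) -> int:
    # ~4 characters per token is a common rule of thumb for English text
    return max(1, len(text) // 4)

# The HTML version spends most of its budget on tags and attributes,
# while the markdown version is almost entirely content.
print("html:", approx_tokens(html), "markdown:", approx_tokens(markdown))
```

Scaled up to a full blog post, that markup overhead is what accounts for the roughly 5x reduction the article describes.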

OpenClaw Security Checklist for CISOs: Securing the New Agent Attack Surface

OpenClaw exposes a fundamental misalignment between how traditional enterprise security is designed and how AI agents actually operate. As an AI agent assistant, OpenClaw operates with human permissions, executes actions autonomously, and processes untrusted content as input, all while sitting outside the visibility of conventional security tools.

What is OpenClaw and Agentic AI? The Security Issues You Need to Be Aware of Now

Over the past several weeks, OpenClaw and Moltbook have exploded across the headlines. Outlets have published stories about AI agents organizing themselves or even acting independently on Moltbook. SecurityScorecard’s Jeremy Turner, VP of Threat Intelligence & Research, and Anne Griffin, Head of AI Product Strategy, discuss what OpenClaw is, how agentic AI works, and where the real security issues are, based on new research from SecurityScorecard's STRIKE Threat Intelligence team.

From Shadow APIs to Shadow AI: How the API Threat Model Is Expanding Faster Than Most Defenses

The shadow technology problem is getting worse. Over the past few years, organizations have scaled microservices, cloud-native apps, and partner integrations faster than corporate governance models could keep up, resulting in undocumented or shadow APIs. We’re now seeing this pattern all over again with AI systems. And, even worse, AI introduces non-deterministic behavior, autonomous actions, and machine-to-machine decision-making. Put simply, shadow AI is much, much riskier than shadow APIs.

AI Attacks, CaaS & the New Reality of Banking Security

This week, in the Guardians of the Enterprise episode, Ashish Tandon, Founder & CEO of Indusface, speaks with Madhur Joshi, CISO at HDB Financial Services (part of the HDFC Group), about how large financial institutions are navigating a rapidly evolving cyber threat landscape. The conversation covers the rise of AI-driven attacks, Cybercrime-as-a-Service (CaaS), and the growing complexity that comes with expanding digital footprints across cloud, applications, and APIs.

Why Confusing ChatGPT and LLMs as the Same Thing Creates Security Blind Spots

When news broke that the Head of CISA uploaded sensitive data to ChatGPT, the response was predictable: panic, headlines, and renewed questions about AI safety. But this incident reveals more about confusion than actual risk. The real issue? Most organizations don’t understand what they’re actually risking when they use AI tools. Let’s fix that.

How miniOrange's GPT App Connects LLMs to Your WordPress Site

WordPress is entering a new phase in how websites are managed with the introduction of API Abilities and support for the Model Context Protocol (MCP). These updates allow WordPress core, plugins, and themes to clearly define the actions they support and how those actions should be executed. For the first time, WordPress can communicate its capabilities in a structured way that large language models can reliably understand.
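Under MCP, a capability is advertised as a structured tool definition: a name, a description, and a JSON Schema describing its inputs, which a client lists and an LLM can call against. A minimal sketch of what a WordPress-style ability might look like when exposed as an MCP tool follows; the tool name and field layout are hypothetical illustrations, not the actual WordPress Abilities API schema:

```python
# Hypothetical sketch of an MCP-style tool definition for a WordPress ability.
# The name "wordpress_create_post" and its fields are illustrative assumptions.

import json

create_post_tool = {
    "name": "wordpress_create_post",
    "description": "Create a draft post on the site.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "content": {"type": "string", "description": "Post body in markdown"},
            "status": {"type": "string", "enum": ["draft", "publish"]},
        },
        "required": ["title", "content"],
    },
}

# An MCP client would receive definitions like this from a tools/list response,
# so it can validate an LLM-proposed call against the schema before executing it.
print(json.dumps(create_post_tool, indent=2))
```

This structured contract is the point of the change: instead of a model guessing at admin screens or REST endpoints, the site declares exactly which actions exist and what arguments they take.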