
Where Cato Sits in the AI Economy

Every major technological shift reshapes the landscape, creating both winners and losers. AI will be no different. The key question is which companies are positioned to capture the value it generates, and which may fall behind as it unfolds. In previous technology shifts, the winners were not always the companies building the most visible products. They were often the ones that enabled the shift to happen in the first place, or those that benefited from the structural changes it created.

OpenClaw Needs Real Security Controls; We Built Them Open Source

AI agent adoption and development are evolving quickly. The tooling used to build agents is improving fast, but the security controls around those agents are often rigid, opaque, or difficult to adapt to real environments. As more teams experiment with OpenClaw, one challenge becomes clear: developers need ways to inspect what agents are doing, evaluate risky behavior, and intervene when necessary.
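The post doesn't prescribe an implementation, but the first of those needs, inspection, reduces to making every agent action observable. Below is a minimal, hypothetical Python sketch: each tool handed to an agent is wrapped so its calls are appended to a JSONL audit log before they execute. The wrapper, log path, and tool shapes are all assumptions, not OpenClaw's actual API.

```python
import json
import time
from typing import Any, Callable

def audited(tool_name: str, tool_fn: Callable[..., Any],
            log_path: str = "agent_audit.jsonl") -> Callable[..., Any]:
    """Wrap a tool so every call is appended to a JSONL audit log."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        record = {
            "ts": time.time(),
            "tool": tool_name,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return tool_fn(*args, **kwargs)
    return wrapper

# Usage: wrap each tool before handing it to the agent runtime.
read_file = audited("read_file", lambda path: open(path).read())
```

Evaluation and intervention can then be layered on top of the same log, for example by scanning records for risky tool and argument combinations before allowing a call to proceed.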

The Shift to Continuous Context and the Rise of Guardian Agents

AI agent risk doesn’t emerge in a single moment. It develops over time, across configuration changes, runtime behavior, long-horizon tasks, and interactions between agents, users, and enterprise systems. An agent’s behavior and exposure can shift in real time as it rewrites instructions, updates memory, and dynamically alters execution.
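As a concrete illustration of the guardian-agent pattern, here is a minimal Python sketch: a second component vetoes each action a monitored agent proposes before it executes. Everything here (the `ProposedAction` shape, the blocklist policy) is hypothetical; the article describes the pattern, not an API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    tool: str
    args: dict

BLOCKED_TOOLS = {"shell", "send_email"}  # example static policy

def guardian_allows(action: ProposedAction) -> bool:
    """Return True if the monitored agent may perform this action."""
    if action.tool in BLOCKED_TOOLS:
        return False
    # A real guardian would also track drift across time: memory edits,
    # rewritten instructions, and long-horizon task state.
    return True

def run_with_guardian(action: ProposedAction,
                      execute: Callable[[ProposedAction], str]) -> str:
    if not guardian_allows(action):
        return f"blocked: {action.tool}"
    return execute(action)
```

The design point is that the policy lives outside the agent being watched, so a compromised or drifting agent cannot rewrite its own guardrails.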

BewAIre: Detecting Malicious Pull Requests at Scale with LLMs

As AI coding assistants accelerate software development, the volume of pull requests at Datadog has grown to nearly 10,000 per week, increasing the risk that malicious changes slip through due to review fatigue. To address this, Datadog built BewAIre, an LLM-powered code review system designed to identify malicious source code changes introduced by threat actors. BewAIre guides human reviewers to the areas where judgment matters most, reducing approval fatigue for developers and increasing friction for attackers without slowing developer velocity.
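BewAIre's internals aren't spelled out beyond "LLM-powered review of code changes," but the general shape of such a system is straightforward: send each diff to a model with a security-review prompt and route suspicious verdicts to a human. A rough sketch, assuming the `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name and prompt are placeholders, not Datadog's.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security reviewer. Given a unified diff, decide whether the "
    "change could be malicious (backdoor, secret exfiltration, CI tampering). "
    "Answer 'VERDICT: benign' or 'VERDICT: suspicious' plus one sentence of reasoning."
)

def triage_diff(diff: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": diff[:20000]},  # truncate very large diffs
        ],
    )
    return resp.choices[0].message.content

# Suspicious verdicts get routed to a human reviewer; benign ones proceed,
# keeping review friction focused where human judgment matters most.
```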

Homomorphic Encryption in LLM Pipelines: Why It Fails in 2026

There’s a claim gaining traction in the market: homomorphic encryption can preserve data privacy in AI workflows. Encrypt your data, run it through a language model, and never expose a single token. Sounds bulletproof. It isn’t. Homomorphic encryption (HE) was built for math, not language. Applying it to LLM pipelines is like encrypting a book and asking someone to summarize it without reading a word. The problem isn’t efficiency; it’s that HE computes blind arithmetic over ciphertexts, while language tasks require the model to actually read the content.
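To make "built for math, not language" concrete, here is a toy additively homomorphic scheme (Paillier with deliberately tiny, insecure primes): adding two encrypted integers works without ever decrypting them, which is exactly the kind of operation HE supports. Nothing in it can tokenize text or evaluate a transformer layer. Illustration only; requires Python 3.8+ for the modular-inverse form of `pow`.

```python
import math
import random

# Toy Paillier keypair. Tiny hardcoded primes: insecure, illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
phi = (p - 1) * (q - 1)
mu = pow(phi, -1, n)  # modular inverse of phi mod n (Python 3.8+)

def encrypt(m: int) -> int:
    # c = (1 + n)^m * r^n mod n^2, with r random and coprime to n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, phi, n2)
    return ((x - 1) // n) * mu % n

a, b = encrypt(20), encrypt(22)
# Multiplying ciphertexts adds the underlying plaintexts -- pure arithmetic.
assert decrypt(a * b % n2) == 42
# There is no analogous ciphertext operation for "summarize this text".
```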

Moonshot AI governance breakdown: Lessons from the Cursor/Kimi K2.5 incident

What happens when a $29 billion company forgets to rename a model ID, and what it means for every organization using open-source AI. On March 19, 2025, Cursor, the AI-powered coding tool valued at $29 billion and generating an estimated $2 billion in annual recurring revenue, launched Composer 2, its newest and most powerful coding model.

Why NER models fail at PII detection in LLM workflows: 7 critical gaps

In AI systems, PII detection is the first step. Not the most glamorous step, but the one that, when it fails, takes everything else down with it. Identifying sensitive data (names, Social Security numbers, financial records, health information) has to happen before any of it reaches an LLM. Get this wrong and you’re looking at one of two bad outcomes: sensitive data leaks into the model, or over-aggressive redaction strips out the context the model needs. Traditional DLP systems could afford to be aggressive with detection. LLMs can’t; they depend on full context to generate correct outputs.
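One of the gaps is easy to demonstrate: general-purpose NER models have entity types for names and organizations but no concept of structured identifiers like SSNs, so those need separate pattern matching alongside the NER pass. A minimal sketch, assuming spaCy with the `en_core_web_sm` model installed; the example SSN is the well-known invalid specimen 078-05-1120.

```python
import re
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Contact Jane Doe, SSN 078-05-1120, about the insurance claim."

# Off-the-shelf NER: likely finds the PERSON, but has no SSN entity type.
ner_hits = [(ent.text, ent.label_) for ent in nlp(text).ents]

# Structured identifiers need their own patterns alongside the NER pass.
ssn_hits = re.findall(r"\b\d{3}-\d{2}-\d{4}\b", text)

print("NER entities:", ner_hits)
print("Regex SSNs:  ", ssn_hits)
```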