
Why Confusing ChatGPT and LLMs as the Same Thing Creates Security Blind Spots

When news broke that the Head of CISA uploaded sensitive data to ChatGPT, the response was predictable: panic, headlines, and renewed questions about AI safety. But this incident reveals more about confusion than actual risk. The real issue? Most organizations don’t understand what they’re actually risking when they use AI tools. Let’s fix that.

How Cloud-Native Applications Defend Against DDoS Attacks

As organizations migrate critical applications to the cloud, DDoS attacks on cloud-hosted services have become a growing concern. Unlike traditional threats, these attacks are increasingly targeted, sophisticated, and capable of disrupting services in ways that threaten entire business operations and continuity.
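One common first layer of DDoS mitigation in cloud-native stacks is per-client rate limiting at the edge. The sketch below shows a minimal token-bucket limiter; the `rate` and `capacity` values, the `check` helper, and the per-IP dictionary are illustrative assumptions, not part of the article.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client IP; requests beyond the budget would be rejected
# (typically with HTTP 429) before they reach application servers.
buckets: dict[str, TokenBucket] = {}

def check(ip: str, rate: float = 5.0, capacity: int = 10) -> bool:
    bucket = buckets.setdefault(ip, TokenBucket(rate, capacity))
    return bucket.allow()
```

In practice this sits behind the volumetric scrubbing a cloud provider supplies; application-level limiters like this one mainly absorb the smaller, targeted floods the article describes.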

AI Attacks, CaaS & the New Reality of Banking Security

In this week's episode of Guardians of the Enterprise, Ashish Tandon, Founder & CEO of Indusface, speaks with Madhur Joshi, CISO at HDB Financial Services (part of the HDFC Group), about how large financial institutions are navigating a rapidly evolving cyber threat landscape. The conversation covers the rise of AI-driven attacks, Cybercrime-as-a-Service (CaaS), and the growing complexity that comes with expanding digital footprints across cloud, applications, and APIs.

From Shadow APIs to Shadow AI: How the API Threat Model Is Expanding Faster Than Most Defenses

The shadow technology problem is getting worse. Over the past few years, organizations have scaled microservices, cloud-native apps, and partner integrations faster than corporate governance models could keep up, resulting in undocumented, or shadow, APIs. We're now seeing this pattern all over again with AI systems. Even worse, AI introduces non-deterministic behavior, autonomous actions, and machine-to-machine decision-making. Put simply, shadow AI is much, much riskier than shadow APIs.
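A common way teams surface shadow APIs is to diff the endpoints actually observed in gateway or access logs against the documented API inventory. The sketch below illustrates that idea; the `documented` set and `observed` list are hypothetical stand-ins for a real spec and real parsed logs.

```python
# Endpoints the organization has formally documented (e.g., from OpenAPI specs).
documented = {
    ("GET", "/v1/users"),
    ("POST", "/v1/orders"),
}

# (method, path) pairs parsed from gateway access logs.
observed = [
    ("GET", "/v1/users"),
    ("POST", "/v1/orders"),
    ("GET", "/internal/debug"),   # never made it into the spec
    ("POST", "/v2/orders"),       # undocumented new version
]

# Anything observed in traffic but absent from the inventory is a shadow endpoint.
shadow = sorted(set(observed) - documented)
for method, path in shadow:
    print(f"shadow endpoint: {method} {path}")
```

Real deployments normalize path parameters (e.g., `/v1/users/123` to `/v1/users/{id}`) before diffing, but the core logic is this set difference.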

What Are OpenClaw and Agentic AI? The Security Issues You Need to Be Aware of Now

Over the past several weeks, OpenClaw and Moltbook have exploded across the headlines. Outlets have published stories about AI agents organizing themselves, or even acting independently, on Moltbook. SecurityScorecard's Jeremy Turner, VP of Threat Intelligence & Research, and Anne Griffin, Head of AI Product Strategy, discuss what OpenClaw is, how agentic AI works, and where the real security issues lie, based on new research from SecurityScorecard's STRIKE Threat Intelligence team.

OpenClaw Security Checklist for CISOs: Securing the New Agent Attack Surface

OpenClaw exposes a fundamental misalignment between how traditional enterprise security is designed and how AI agents actually operate. As an AI agent assistant, OpenClaw operates with human permissions, executes actions autonomously, and processes untrusted content as input, all while sitting outside the visibility of conventional security tools.
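One mitigation the checklist framing suggests is refusing to let an agent inherit a user's full permissions: every agent-initiated action is checked against an explicit allowlist rather than trusted because the model proposed it. The sketch below is a generic deny-by-default guard; the action names, path prefixes, and `is_permitted` helper are illustrative assumptions, not OpenClaw's actual API.

```python
# Deny-by-default policy: an agent action runs only if its type is known
# AND its target matches an approved prefix. Everything else is blocked,
# including actions the policy has never heard of.
ALLOWED_ACTIONS: dict[str, set[str]] = {
    "read_file": {"/docs", "/tmp"},   # readable path prefixes
    "send_email": set(),              # never allowed autonomously
}

def is_permitted(action: str, target: str) -> bool:
    prefixes = ALLOWED_ACTIONS.get(action)
    if not prefixes:                  # unknown action or empty set: deny
        return False
    return any(target.startswith(p) for p in prefixes)
```

The key design choice is the default: untrusted content can steer what an agent *asks* to do, so the guard must reject anything it cannot positively match, rather than block only known-bad actions.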