Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

1 in 15 MCP Servers Are Lookalikes: Is Your Org at Risk?

Researchers recently analyzed 18,000 Claude Code configuration files pulled from public GitHub repositories. What they found was straightforward and alarming: developers are already installing MCP servers with mistyped, misconfigured, and near-identical names — often without realizing it. The human-error condition that makes typosquatting work was already present at scale before any attacker needed to exploit it.
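The lookalike problem the researchers describe can be screened for mechanically. As a minimal sketch (not the researchers' actual methodology), the snippet below compares configured MCP server names against a small allowlist and flags near misses by edit distance; the names in KNOWN_SERVERS are illustrative assumptions, not a real registry:

```python
# Sketch: flag MCP server names that resemble, but don't exactly match,
# a known-good allowlist — the config-level symptom of typosquatting.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative allowlist only; a real check would use a vetted registry.
KNOWN_SERVERS = {"filesystem", "github", "postgres"}

def find_lookalikes(configured: list[str], max_dist: int = 2) -> list[tuple[str, str]]:
    """Return (configured_name, closest_known_name) pairs that are near
    misses: not an exact allowlist match, but within max_dist edits."""
    hits = []
    for name in configured:
        if name in KNOWN_SERVERS:
            continue  # exact match: fine
        for known in KNOWN_SERVERS:
            if edit_distance(name.lower(), known) <= max_dist:
                hits.append((name, known))
                break
    return hits

print(find_lookalikes(["filesystem", "fliesystem", "git-hub", "postgress"]))
```

Running this flags `fliesystem`, `git-hub`, and `postgress` while letting the exact match through — the same one-or-two-character slips the study found at scale in real configuration files.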

MCP: The AI Protocol Quietly Expanding Your Attack Surface

In February 2026, researchers uncovered something that should give every security leader pause. A malware operation called SmartLoader, previously known for targeting consumers who downloaded pirated software, had completely pivoted its infrastructure. SmartLoader's new target was developers, and its new entry point was a protocol most security teams had never heard of. The payload delivered to victims: every saved browser password, every cloud session token, every SSH key on the machine.

Mythos, Attackers, and The Part People Still Want To Skip

Anthropic built a powerful AI model and then kept it on a short leash. The important part is not that a model found bugs, which has been coming for a while. What’s worth acknowledging is that Anthropic looked at what Mythos could do and decided broad release was a bad idea. Attackers do not need a perfect autonomous system. They need leverage.

What Real AI Security Incidents Reveal About Today's Risks

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

Ep. 56 - 10,000 Bugs, 12 That Matter: Using AI to Cut Through Exposure Noise with CTEM

Are you still stuck on the vulnerability hamster wheel? In this episode of the Cyber Resilience Brief, host Tova Dvorin is joined by SafeBreach VP of Product Koby Bar and offensive security expert Adrian Culley to unpack a major shift in how enterprises approach proactive security — and to announce the launch of SafeBreach Helm, the AI validation layer built for Continuous Threat Exposure Management (CTEM).

Falcon Exposure Management AI Inventory: Demo Drill Down

AI adoption is accelerating across the enterprise, but governance isn’t keeping pace—leaving security teams without a clear view of what AI is running, how it’s being used, and where it introduces exposure. In this Demo Drill Down, we showcase AI Inventory in Falcon Exposure Management, delivering a centralized view of AI across hosts—from local LLMs and MCP servers to IDE extensions, packages, and applications.

Nine Seconds to Delete a Database: What the PocketOS Incident Teaches Us About AI Agent Privilege Management

There’s never a good time to lose a production database, but losing one to your own AI coding agent on a Friday afternoon has to rank near the bottom of the list. That’s the backdrop to the PocketOS incident, and it’s the clearest case yet for why AI agent security and intent-based access control belong at the top of every cloud security roadmap this year.

How Zero Standing Privileges Defuses the Shadow AI Agent Problem

As more organizations move past experimentation and start planning real AI agent deployments, the same set of concerns keeps surfacing in our conversations with security teams. Whether the worry is a shadow agent that shows up uninvited or a sanctioned agent going rogue, the questions tend to cluster around control. These are the right questions to be asking, and they share a common answer that's more concrete than most people expect: AI agents are only as dangerous as the privileges they can reach.

The Metric AI Security is Missing

As autonomous and semi-autonomous AI systems take on more responsibility within the enterprise, they shift from being “features” of software to becoming true internal actors. They make decisions, take actions, call tools, orchestrate workflows, and influence other AI agents. With this evolution, we must confront an uncomfortable truth: the metrics and response patterns we built for deterministic software no longer work.