
Poisoned Axios: npm Account Takeover, 50 Million Downloads, and a RAT That Vanishes After Install

On March 30-31, 2026, threat actors published two malicious versions of the popular HTTP library axios (versions 1.14.1 and 0.30.4) to the npm registry. Both versions included a new dependency named plain-crypto-js which, in its 4.2.1 release, contained a fully featured cross-platform dropper that silently installed a Remote Access Trojan (RAT) on developer machines.

Popular Telnyx PyPI Package Compromised by TeamPCP

Part 1 covered CanisterWorm, the self-spreading npm worm. Part 2 covered the malicious LiteLLM package and its .pth persistence. This post covers the third wave: a compromised telnyx PyPI package that hides its payload inside audio files and delivers entirely different malware depending on the victim's operating system.
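For readers unfamiliar with the .pth persistence trick mentioned above: Python's site initialization executes any line in a site-packages .pth file that begins with "import", which gives malware code execution at every interpreter startup. The harmless sketch below (hypothetical file and variable names, not TeamPCP's actual payload) demonstrates the mechanism in an isolated temporary directory rather than the real site-packages folder.

```python
import os
import site
import sys
import tempfile

# Write a .pth file into a throwaway directory. A real implant would drop
# this into site-packages and import its payload; here the "payload" just
# sets an environment variable so we can observe that it ran.
demo_dir = tempfile.mkdtemp()
pth_path = os.path.join(demo_dir, "persist_demo.pth")
with open(pth_path, "w") as f:
    # Lines starting with "import" in a .pth file are executed verbatim
    # by site.addpackage() during interpreter startup.
    f.write("import os; os.environ.setdefault('PTH_DEMO_RAN', '1')\n")

# site.addsitedir() processes .pth files the same way the standard site
# initialization does for site-packages when Python starts.
site.addsitedir(demo_dir)
print(os.environ.get("PTH_DEMO_RAN"))  # -> 1
```

Because the execution happens inside Python's own startup machinery, the technique survives even if the malicious package is later uninstalled, as long as the .pth file remains on disk.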

Understanding Malicious Packages in Modern Software Supply Chains

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

TeamPCP Supply Chain Attack Part 2: LiteLLM PyPI Credential Stealer

Part 1 covered CanisterWorm, the self-spreading npm worm. This post covers the next wave: a malicious LiteLLM PyPI package carrying the most capable credential stealer TeamPCP has deployed yet. On March 24, 2026, two versions of litellm, one of the most widely used Python libraries for working with AI language model APIs, were published to PyPI carrying a hidden credential stealer. Versions 1.82.7 and 1.82.8 never appeared on the official LiteLLM GitHub repository.

CanisterWorm: The Self-Spreading npm Attack That Uses a Decentralized Server to Stay Alive

On March 20, 2026 at 20:45 UTC, Aikido Security detected an unusual pattern across the npm registry: dozens of packages from multiple organizations were receiving unauthorized patch updates, all containing the same hidden malicious code. What they had caught was CanisterWorm, a self-spreading npm worm deployed by the threat actor group TeamPCP. We track this incident as MSC-2026-3271.

Moonshot AI governance breakdown: Lessons from the Cursor/Kimi K2.5 incident

What happens when a $29 billion company forgets to rename a model ID, and what it means for every organization using open-source AI. On March 19, 2025, Cursor, the AI-powered coding tool valued at $29 billion and generating an estimated $2 billion in annual recurring revenue, launched Composer 2, its newest and most powerful coding model.

Introducing AI-powered Contextual Project Classification: From severity scores to business risk

Today, Mend.io is launching Contextual Project Classification, an AI-native feature that automatically analyzes your codebase to identify which applications handle sensitive data like payments, healthcare records, and PII, enabling true risk-based security prioritization.

Introducing System Prompt Hardening: production-ready protection for system prompts

Today, we’re launching System Prompt Hardening, Mend.io’s new capability that defends the hidden instructions that control how your AI systems behave. Unlike user-facing prompts, system prompts live behind the scenes, and when attackers manipulate them, the result can be data leaks, policy bypasses, or unsafe model behavior. System Prompt Hardening stops those attacks at the source and gives security, engineering, and risk teams a practical, auditable way to secure AI in production.

AI Compliance: 5 Key Frameworks, Challenges, and Best Practices

AI compliance ensures that AI systems follow laws, ethical norms, and standards. It manages risks such as bias, privacy violations, and lack of transparency through robust governance, documentation, and continuous monitoring, drawing on frameworks like the EU AI Act and the NIST AI Risk Management Framework (RMF) to build trust and avoid penalties when developing, deploying, and operating AI.