
TeamPCP Supply Chain Attack Part 2: LiteLLM PyPI Credential Stealer

Part 1 covered CanisterWorm, the self-spreading npm worm. This post covers the next wave: a malicious LiteLLM PyPI package carrying the most capable credential stealer TeamPCP has deployed yet. On March 24, 2026, two versions of litellm, one of the most widely used Python libraries for working with AI language model APIs, were published to PyPI carrying a hidden credential stealer. Versions 1.82.7 and 1.82.8 never appeared on the official LiteLLM GitHub repository.
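For teams triaging exposure, the two version strings above are the key indicator. A minimal sketch of a defensive check, assuming the compromised releases are exactly 1.82.7 and 1.82.8 as reported (the helper name is illustrative, not from any advisory):

```python
# Flag the two litellm versions reported as malicious on PyPI.
# Version strings come from the incident report; everything else is illustrative.
BAD_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(installed_version: str) -> bool:
    """Return True if the given litellm version matches one of the
    trojanized releases published to PyPI on March 24, 2026."""
    return installed_version.strip() in BAD_VERSIONS

# Example: compare against the version pinned in your lockfile or
# reported by `pip show litellm`.
print(is_compromised("1.82.7"))  # True
print(is_compromised("1.82.6"))  # False
```

In practice you would feed this the version from your environment (for example via `importlib.metadata.version("litellm")`) rather than a hard-coded string.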

CanisterWorm: The Self-Spreading npm Attack That Uses a Decentralized Server to Stay Alive

On March 20, 2026 at 20:45 UTC, Aikido Security detected an unusual pattern across the npm registry: dozens of packages from multiple organizations were receiving unauthorized patch updates, all containing the same hidden malicious code. What they had caught was CanisterWorm, a self-spreading npm worm deployed by the threat actor group TeamPCP. We track this incident as MSC-2026-3271.

Moonshot AI governance breakdown: Lessons from the Cursor/Kimi K2.5 incident

What happens when a $29 billion company forgets to rename a model ID, and what it means for every organization using open-source AI. On March 19, 2026, Cursor, the AI-powered coding tool generating an estimated $2 billion in annual recurring revenue, launched Composer 2, its newest and most powerful coding model.

Introducing AI-powered Contextual Project Classification: From severity scores to business risk

Today, Mend.io is launching Contextual Project Classification, an AI-native feature that automatically analyzes your codebase to identify which applications handle sensitive data like payments, healthcare records, and PII, enabling true risk-based security prioritization.

Introducing System Prompt Hardening: production-ready protection for system prompts

Today, we’re launching System Prompt Hardening, Mend.io’s new capability that defends the hidden instructions that control how your AI systems behave. Unlike user-facing prompts, system prompts live behind the scenes, and when attackers manipulate them, the result can be data leaks, policy bypasses, or unsafe model behavior. System prompt hardening stops those attacks at the source and gives security, engineering, and risk teams a practical, auditable way to secure AI in production.

AI Compliance: 5 Key Frameworks, Challenges, and Best Practices

AI compliance ensures AI systems follow laws, ethical norms, and industry standards. It manages risks like bias, privacy violations, and lack of transparency through robust governance, documentation, and continuous monitoring, drawing on frameworks such as the EU AI Act and the NIST AI Risk Management Framework (RMF) to build trust and avoid penalties across the development, deployment, and operation of AI.

AI Risk Management: Process, Frameworks, and 5 Mitigation Methods

AI risk management is the process of identifying, assessing, and mitigating risks associated with artificial intelligence systems to ensure they are developed and used responsibly. It involves using frameworks like the NIST AI Risk Management Framework to address technical, ethical, and social challenges, including data bias, privacy violations, and security vulnerabilities.

Why Claude Code Security Is a Big Moment for Application Security

Anthropic’s launch of Claude Code Security is exciting. Not because it changes everything overnight — but because it confirms something important: AI-powered security inside the developer workflow is becoming the new normal. And that’s a win for the entire industry.

Best Software Composition Analysis Providers: Top 5 in 2026

Major software composition analysis (SCA) providers include Mend, Black Duck (Synopsys), and Veracode. They offer solutions to find, manage, and fix vulnerabilities and license issues in open-source components, with options ranging from developer-focused tools to enterprise-grade platforms with SBOM generation and deep compliance features.

Securing the New Control Plane: Introducing Static Scanning for AI Agent Configurations

Today, Mend.io is proud to announce the launch of AI Agent Configuration Scanning, integrated directly into the Mend AI Scanner. By treating “Agents as Code,” we are bringing security visibility and CI-friendly enforcement to AI configurations before they reach production. The rapid adoption of AI agents has transformed the modern developer workflow.