
Introducing Mend.io's AI Security Dashboard: A Clear View into AI Risk

Most dashboards are like a busy beach with one lifeguard watching the entire shoreline. They keep an eye on everything, but the sheer scope means that critical issues—like risks in AI applications—can get lost in the crowd. Mend.io’s AI Security Dashboard changes that. It’s like a lifeguard tower posted directly at the AI section of the beach, keeping a sharp, dedicated watch on AI-specific risks that other tools overlook.

How to Spot and Stop Security Risks From Unmanaged AI Tools: Shadow AI, LLM Agents, Compliance Risks

Shadow AI is exploding in organizations—developers are using AI tools and models without approval, often embedding them into production systems. In this webinar, Mend.io EVP of Product Management Nir Stern explains the real risks behind unmanaged AI tools, why traditional AppSec can’t keep up, and eight practical steps to regain control.

AI Meets SAST - Reimagining the Future of Static Analysis | Webinar | Mend.io

Join host Tony Morbin as he explores how AI is revolutionizing Static Application Security Testing (SAST) in this future-forward episode with Saoirse Hinksmon, Senior Product Marketing Manager at Mend.io, and Amir Shahmir, Senior Sales Engineer at Mend.io. This isn’t your average security webinar — it’s a deep dive into the convergence of AI and SAST, uncovering how GenAI is making static analysis faster, smarter, and more actionable for developers and AppSec teams alike.

AI Is Writing the Code - Can Security Keep Up? | How to Secure Agentic IDEs from Dev to CI/CD | Mend

AI coding agents are exploding in use—but are they quietly shipping exploitable code? In this webinar, we break down real data, real incidents, and a practical blueprint for securing AI-accelerated development.

NPM Ecosystem Under Siege: Self-Propagating Malware Compromises 187 Packages in a Huge Supply Chain Attack

The NPM ecosystem has been rocked by one of its widest supply chain attacks to date, with over 187 popular packages compromised by advanced malware capable of self-propagation and automated credential harvesting. This attack, affecting packages with millions of weekly downloads including angulartics2, ngx-toastr, and @ctrl/tinycolor, demonstrates how cybercriminals are evolving their tactics to create “worm-like” malware that can autonomously spread across the software supply chain.
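As a quick, hedged illustration (not part of the original advisory), a team can check whether a project's dependency tree mentions any of the packages named above. A match only means the package is present; the installed version still has to be compared against the published list of compromised releases:

```shell
# Search an npm lockfile for the package names called out in the advisory.
# This is a coarse first pass: a hit does not prove compromise, and the
# exact affected version ranges must be checked against the advisory itself.
if grep -E '(angulartics2|ngx-toastr|@ctrl/tinycolor)' package-lock.json; then
  echo "named packages found: verify installed versions against the advisory"
else
  echo "none of the named packages found in package-lock.json"
fi
```

Running this from the project root gives a fast signal before a fuller SCA scan is kicked off.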

What Being Customer-Recognized in The Forrester Wave: Static Application Security Testing Solutions, Q3 2025 Really Means

Our customers have been telling us for months: “You’ve made security simple.” Today, Forrester confirmed what our customers already knew. Mend.io has been recognized as a Strong Performer in The Forrester Wave: Static Application Security Testing Solutions, Q3 2025. In our first appearance in the evaluation, we earned top scores in Innovation and Triage. But the recognition that matters most? Being highlighted as a customer favorite.

NPM Supply Chain Attack: Sophisticated Multi-Chain Cryptocurrency Drainer Infiltrates Popular Packages

The NPM ecosystem faced another significant supply chain attack when 18 popular packages, including highly used libraries like debug and chalk, were compromised with advanced cryptocurrency drainer malware. This attack, affecting packages with over 2 billion weekly downloads, demonstrates how cybercriminals are leveraging trusted software distribution channels to deploy advanced Web3 wallet hijacking code.

Catch Bugs Early: Dynamic Scanning & the Cake Analogy Explained #cybersec

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

Data Rejection and API Best Practice #cybersecurity

Why AI Security Tools Are Different and 9 Tools to Know in 2025

As companies embed AI models into their applications, they face risks that traditional security tools weren’t designed to catch, such as prompt injection, data leakage, model poisoning, and shadow AI. Addressing these threats requires a new class of security tools built specifically for AI risks.

Multi-Tenant Systems: Sharing Vulnerabilities #appsec

Google Saved the Day: How Search Solved a Ransomware Alert #appsec

Understanding Bias in Generative AI: Types, Causes & Consequences

Bias in generative AI refers to systematic errors or distortions in the information produced by generative AI models, which can lead to unfair or discriminatory outcomes. These models, trained on vast datasets from the internet, often inherit and amplify the biases present in that data, mirroring societal prejudices and inequities.