Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

LLM Security in 2025: Risks, Mitigations & What's Next

Large language model (LLM) security refers to the strategies and practices that protect the confidentiality, integrity, and availability of AI systems that use large language models. These models, such as OpenAI’s GPT series, are trained on vast datasets and can generate, translate, summarize, and analyze text. However, like any complex software component, LLMs present unique attack surfaces because they can be influenced by the data they process and the prompts they receive from users.

Top 7 SAST tools for DevSecOps Teams in 2025

SAST (Static Application Security Testing) tools are crucial for DevSecOps, enabling automated code analysis to identify vulnerabilities early in the development lifecycle. They analyze source code without execution, detecting issues like SQL injection, XSS, and buffer overflows. Popular SAST tools used by DevSecOps teams include Mend, Checkmarx, Snyk, Veracode, BlackDuck, SonarQube, and Semgrep. Integrating SAST into CI/CD pipelines ensures continuous security checks as code is developed.

42 DevOps Statistics to Know in 2025

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to shorten the development lifecycle and deliver high-quality software continuously. It emphasizes collaboration, automation, and integration between previously siloed teams. DevOps aims to improve deployment frequency, reduce failure rates, and accelerate recovery times.

AI Code Review in 2025: Technologies, Challenges & Best Practices

AI code review leverages artificial intelligence models and machine learning techniques to analyze and provide feedback on source code, automating and improving the traditional code review process. It is crucial for software development workflows, offering significant advantages to developers and teams. AI code review can scan for bugs, style violations, security vulnerabilities, and other issues.

Introducing Mend.io's AI Security Dashboard: A Clear View into AI Risk

Most dashboards are like a busy beach with one lifeguard watching the entire shoreline. They keep an eye on everything, but the sheer scope means that critical issues—like risks in AI applications—can get lost in the crowd. Mend.io’s AI Security Dashboard changes that. It’s like a lifeguard tower posted directly at the AI section of the beach, keeping a sharp, dedicated watch on AI specific risks that other tools overlook.

NPM Ecosystem Under Siege: Self-Propagating Malware Compromises 187 Packages in a Huge Supply Chain Attack

The NPM ecosystem has been rocked by one of its widest supply chain attacks to date, with over 187 popular packages compromised by advanced malware capable of self-propagation and automated credential harvesting. This attack, affecting packages with millions of weekly downloads including angulartics2, ngx-toastr, and @ctrl/tinycolor, demonstrates how cybercriminals are evolving their tactics to create “worm-like” malware that can autonomously spread across the software supply chain.

What Being Customer Recognized in The Forrester Wave: Static Application Security Testing Solutions, Q3 2025 Really Means

Our customers have been telling us for months: “You’ve made security simple.” Today, Forrester confirmed what our customers already knew. Mend.io has been recognized as a Strong Performer in The Forrester Wave: Static Application Security Testing Solutions, Q3 2025. In our first appearance in the evaluation, we earned top scores in Innovation and Triage. But the recognition that matters most? Being highlighted as a customer favorite.

NPM Supply Chain Attack: Sophisticated Multi-Chain Cryptocurrency Drainer Infiltrates Popular Packages

The NPM ecosystem faced another significant supply chain attack when 18 popular packages, including highly-used libraries like debug and chalk, were compromised with advanced cryptocurrency drainer malware. This attack, affecting packages with over 2 billion weekly downloads, demonstrates how cybercriminals are leveraging trusted software distribution channels to deploy advanced Web3 wallet hijacking code.

Why AI Security Tools Are Different and 9 Tools to Know in 2025

As companies embed AI models into their applications, they face risks that traditional security tools weren’t designed to catch, such as prompt injection, data leakage, model poisoning, and shadow AI. Addressing these threats requires a new class of security tools built specifically for AI specific risk.

Understanding Bias in Generative AI: Types, Causes & Consequences

Bias in generative AI refers to the systematic errors or distortions in the information produced by generative AI models, which can lead to unfair or discriminatory outcomes. These models, trained on vast datasets from the internet, often inherit and amplify the biases present in the data, mirroring societal prejudices and inequities.