Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

59 Generative AI Statistics to Know in 2025

Over the past few years, generative AI has moved from research labs into mainstream industries, reshaping how people interact with technology. Advances in deep learning, especially transformer models, allowed systems like ChatGPT and Stable Diffusion to generate human-like text and realistic images. These breakthroughs sparked widespread interest because they showed that AI could create content, making it useful in writing, design, and coding.

SBOM Security: 6 Key Components and Top 3 Use Cases

An SBOM (Software Bill of Materials) is a structured list of components, including third-party and open-source software, that make up a software application. It’s a detailed inventory of everything that goes into a software product, similar to a list of ingredients for food. SBOMs are crucial for improving software security by providing transparency and enabling organizations to identify and address potential vulnerabilities and risks within their software supply chains.

What is an AI Bill of Materials (AI BOM)?

What’s happening under the hood of your AI systems? AI is now a crucial element of modern software applications, and if you don’t have visibility into its components, you’ll be left blind. Similar to a Software Bill of Materials (SBOM), an AI Bill of Materials, AI BOM, or AIBOM has become a crucial framework for documenting and securing this new and complex supply chain. This article is part of a series of articles on Shadow AI.

The Hallucinated Package Attack: Slopsquatting

Imagine a world where, in the middle of programming, your helpful AI assistant tells you to import a package called securehashlib. It sounds real. It looks real. You trust your silicon co-pilot. You run pip install securehashlib. Congratulations. You’ve just opened a backdoor into your software stack—and possibly your company’s infrastructure. The package didn’t exist until yesterday, when an attacker registered it based on a hallucination the AI made last week.

The Complete Guide to SBOM Software Bill of Materials

A Software Bill of Materials (SBOM) is like an ingredient list for software. It provides a detailed inventory of all the components that make up an application, including open source libraries, proprietary code, packages, and containers. Just as food packaging lists ingredients to protect consumers and ensure safety, SBOMs do the same for software by giving visibility into what is inside.

Introducing Mend Forge

Today, we’re thrilled to announce Mend Forge, our new AI native innovation engine and your window into what’s next in application security. At Mend.io, we believe that security innovation shouldn’t happen in a black box. The security landscape is shifting fast, driven by the explosive growth of AI generated code, AI powered applications, and rapidly evolving software supply chains.

What is AI system prompt hardening?

As generative AI tools like ChatGPT, Claude, and others become increasingly integrated into enterprise workflows, a new security imperative has emerged: system prompt hardening. A system prompt is a set of instructions given to an AI model that defines its role, behavior, tone, and constraints for a session. It sets the foundation for how the model responds to user input and remains active throughout the conversation.

Deploying Gen AI Guardrails for Compliance, Security and Trust

AI guardrails are structured safeguards, whether technical, security or ethical, which are designed to guide AI systems so they operate safely, responsibly, and within intended boundaries. Much like highway guardrails that prevent vehicles from veering off course, these measures ensure AI remains aligned with organizational policies, regulations, and ethical values.

Best AI Red Teaming Tools: Top 7 Solutions in 2025

There was a time when “AI red teaming” sounded like a novelty. Now, it’s fast becoming table stakes. If your organization is shipping machine learning or LLM-powered systems into the real world (especially in sensitive domains), you need to know how those systems behave under pressure. That’s where AI red teaming tools come in. These tools help teams stress-test AI the way it will actually be used (and misused).