Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Five Ways to Leverage AI Safely and Responsibly

Artificial Intelligence (AI) is super-charging customer service, amplifying personalized product recommendations, and accelerating workflows that enable humans to focus on higher-value tasks. However, AI cannot deliver desired productivity improvements to financial organizations without foundational security protection in place. In this blog, I recap several best practices that empower financial institutions to leverage AI safely and responsibly.

The AI-Native Era is Here: What this Gartner Innovation Insight Means for Your Software Security

A new era of software engineering is emerging, with artificial intelligence (AI) at the forefront. As the 2025 Gartner Innovation Insight for AI-Native Software Engineering report states: “AI-native software engineering will require software engineering leaders to mitigate new risks and tackle new challenges.” Here are the key insights and perspectives that will help you navigate the new normal.

Shift Left AI Security #coding #cybersecurity

Mend.io, formerly known as Whitesource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development -– using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

7 Proven Ways to Safeguard Personal Data in LLMs

Large Language Models (LLMs) are becoming integral to SaaS products for features like AI chatbots, support agents, and data analysis tools. With that comes a significant privacy risk: if not handled carefully, an LLM can ingest and remix sensitive personal data, potentially exposing private information in unexpected ways. Regulators have taken note – frameworks like GDPR, HIPAA, and PCI-DSS now expect AI systems to implement auditable, runtime controls to protect sensitive data.

Abusing supply chains: How poisoned models, data, and third-party libraries compromise AI systems

The AI ecosystem is rapidly changing, and with this growth comes unique challenges in securing the infrastructure and services that support it. In Part 1 of this series, we explored how attackers target the underlying resources that host and run AI applications, such as cloud infrastructure and storage. In this post, we'll look at threats that affect AI-specific resources in supply chains, which are the software and data artifacts that determine how an AI service operates.

Abusing AI interfaces: How prompt-level attacks exploit LLM applications

In Parts 1 and 2 of this series, we looked at how attackers get access to and take advantage of the infrastructure and supply chains that shape generative AI applications. In this post, we'll discuss AI interfaces, which we define as the entry points and logic that determine how a user interacts with an AI application. These elements can include chat interfaces, such as AI assistants, and API endpoints for supporting services.

Abusing AI infrastructure: How mismanaged credentials and resources expose LLM applications

The swift adoption of generative AI (GenAI) by the software industry has introduced a new area of focus for security engineers: threats targeting the various components of their AI applications. Understanding how these areas are vulnerable to attacks will become increasingly significant as the space evolves. In this series, we'll look at common threats that target the following components of AI applications.

Live at Black Hat: What's AI Really Capable Of?

"This year at Black Hat, the topic of AI was everywhere — from hallway chats to the expo floor. Adam and Cristian took a break from the action for a rare in-person conversation about how adversaries are weaponizing AI, how defenders are using agentic AI, and what we should all be thinking about as AI evolves as an offensive and defensive tool.