Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Live at Black Hat: What's AI Really Capable Of?

"This year at Black Hat, the topic of AI was everywhere — from hallway chats to the expo floor. Adam and Cristian took a break from the action for a rare in-person conversation about how adversaries are weaponizing AI, how defenders are using agentic AI, and what we should all be thinking about as AI evolves as an offensive and defensive tool.

Abusing AI infrastructure: How mismanaged credentials and resources expose LLM applications

The swift adoption of generative AI (GenAI) by the software industry has introduced a new area of focus for security engineers: threats targeting the various components of their AI applications. Understanding how these areas are vulnerable to attacks will become increasingly significant as the space evolves. In this series, we'll look at common threats that target the following components of AI applications.

Abusing AI interfaces: How prompt-level attacks exploit LLM applications

In Parts 1 and 2 of this series, we looked at how attackers get access to and take advantage of the infrastructure and supply chains that shape generative AI applications. In this post, we'll discuss AI interfaces, which we define as the entry points and logic that determine how a user interacts with an AI application. These elements can include chat interfaces, such as AI assistants, and API endpoints for supporting services.

Abusing supply chains: How poisoned models, data, and third-party libraries compromise AI systems

The AI ecosystem is rapidly changing, and with this growth comes unique challenges in securing the infrastructure and services that support it. In Part 1 of this series, we explored how attackers target the underlying resources that host and run AI applications, such as cloud infrastructure and storage. In this post, we'll look at threats that affect AI-specific resources in supply chains, which are the software and data artifacts that determine how an AI service operates.

Cybersecurity Frontlines Now Require Organizations to Address APIs as a Matter of Urgency

APIs operate throughout the digital world to support mobile applications, enable cloud capabilities, power GenAI tools, and conduct invisible operations during every digital interaction. As the growth of API usage accelerates, Akamai’s 2024 API Security Impact Report shows that organizations find it difficult to align their security efforts with the expanding risk domain.

7 Proven Ways to Safeguard Personal Data in LLMs

Large Language Models (LLMs) are becoming integral to SaaS products for features like AI chatbots, support agents, and data analysis tools. With that comes a significant privacy risk: if not handled carefully, an LLM can ingest and remix sensitive personal data, potentially exposing private information in unexpected ways. Regulators have taken note – frameworks like GDPR, HIPAA, and PCI-DSS now expect AI systems to implement auditable, runtime controls to protect sensitive data.

How external attackers and malicious insiders exploit standing privileges in the cloud

For many of us, the term “cloud security breach” conjures meticulous attack plans executed by sophisticated criminal syndicates. But in reality, “attacks” can be far more mundane: maybe some forgotten credentials, a few default permissions, or a user whose cleanup to-do list never got done. At the center of these incidents are standing privileges: long-lived access rights originally granted for legitimate tasks.

The Rules Have Changed AI vs AI #aisecurity #ai

Mend.io, formerly known as Whitesource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development -– using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.