
The Attacker's Lens: The Hidden Path to Large-Scale LLM Exploits

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open-source and custom code, and open-source license risks.

Securing the New Control Plane: Introducing Static Scanning for AI Agent Configurations

Today, Mend.io is proud to announce the launch of AI Agent Configuration Scanning, integrated directly into the Mend AI Scanner. By treating “Agents as Code,” we are bringing security visibility and CI-friendly enforcement to AI configurations before they reach production. The rapid adoption of AI agents has transformed the modern developer workflow.
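
To make the "Agents as Code" idea concrete, here is a minimal sketch of what a CI-side configuration check could look like. The config schema, rule names, and thresholds are illustrative assumptions for this sketch, not Mend's actual scanner.

```python
# Illustrative "Agents as Code" check: lint an AI agent configuration for
# risky settings before it ships. The schema and rules are hypothetical.
import sys
import yaml  # PyYAML

SAMPLE_AGENT_CONFIG = """
name: billing-assistant
model: gpt-4o
tools:
  - name: shell_exec          # unrestricted shell access is a red flag
    allow_all_commands: true
  - name: crm_lookup
    scopes: [read:customers]
guardrails:
  require_human_approval: false
  max_autonomous_steps: 50
"""

def scan(config: dict) -> list[str]:
    findings = []
    for tool in config.get("tools", []):
        if tool.get("allow_all_commands"):
            findings.append(f"tool '{tool['name']}' grants unrestricted command execution")
        elif not tool.get("scopes"):
            findings.append(f"tool '{tool['name']}' declares no scopes")
    guardrails = config.get("guardrails", {})
    if not guardrails.get("require_human_approval", False):
        findings.append("agent can act without human approval")
    if guardrails.get("max_autonomous_steps", 0) > 25:
        findings.append("autonomous step budget is unusually high")
    return findings

if __name__ == "__main__":
    issues = scan(yaml.safe_load(SAMPLE_AGENT_CONFIG))
    for issue in issues:
        print(f"FAIL: {issue}")
    sys.exit(1 if issues else 0)  # non-zero exit blocks the CI pipeline
```

In a real pipeline the same check would run against the agent config files in the repository, so a risky change fails the build before it ever reaches production.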

OpenClaw as a Security Threat - The 443 Podcast - Episode 358

This week on the podcast, we discuss OpenClaw, the open-source chatbot that has exploded in popularity since launching late last year, and some of the risks it introduces to organizations. Before that, we chat about Ring's Super Bowl advertisement that caused a stir, and we end with a Google Threat Intelligence Group report on AI usage by advanced threat actors.

The Real Risks of Agentic AI in the Enterprise with Camille Stewart-Gloster

In this episode of Data Security Decoded, host Caleb Tolin is joined by Camille Stewart-Gloster, CEO of CAS Strategies and former Deputy National Cyber Director, to unpack how AI is redefining cyber risk at every layer of the organization. Camille explains why identity-based attacks are so effective and how non-human identities (from APIs to AI agents) are quietly expanding the attack surface. She emphasizes how critical it is for organizations to enable MFA as they scale up AI operations, and why conditional access and governance must be foundational, not optional.

Why Your AI Agents Aren't Enterprise Ready #ai #shorts

Stop building AI agents that CISOs will never approve. If your agents are stuck in the POC (Proof of Concept) stage, it’s likely because they lack a "Passport" and a governance framework. In this clip, Arjun Subedi breaks down why "how well it works" isn't the biggest question in AI anymore—it's "how can I govern it?" Discover how mapping AGENTIC attacks to the MITRE ATT&CK framework through SafeMCP is the missing link to enterprise-level deployment.
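
To illustrate what mapping agentic attacks onto ATT&CK can look like in practice, here is a rough sketch. The agentic technique names and their pairings are illustrative assumptions rather than the SafeMCP catalog; the ATT&CK IDs themselves are real.

```python
# Illustrative mapping of agentic attack techniques to their closest
# MITRE ATT&CK analogues, plus a simple coverage report against the
# detections a SOC already has. Pairings are an interpretation only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    agentic_technique: str
    attck_id: str
    attck_name: str

AGENTIC_TO_ATTCK = [
    Mapping("Malicious MCP server in the tool supply chain",  "T1195", "Supply Chain Compromise"),
    Mapping("Stolen agent API key reused by an attacker",     "T1078", "Valid Accounts"),
    Mapping("Agent coerced into executing shell commands",    "T1059", "Command and Scripting Interpreter"),
    Mapping("Data exfiltrated through an outbound tool call", "T1567", "Exfiltration Over Web Service"),
]

def coverage(detected_attck_ids: set[str]) -> None:
    """Report which agentic techniques existing ATT&CK-based detections cover."""
    for m in AGENTIC_TO_ATTCK:
        status = "covered" if m.attck_id in detected_attck_ids else "GAP"
        print(f"{status:7} {m.agentic_technique} -> {m.attck_id} ({m.attck_name})")

if __name__ == "__main__":
    coverage({"T1078", "T1059"})  # IDs your SOC already has detections for
```

Framing agent risk in ATT&CK terms like this lets security teams reuse the detection and governance language a CISO already trusts.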

Why reducing AI risk starts with treating agents as identities

As AI systems become part of our day-to-day operations, a central reality becomes unavoidable: AI doesn’t configure itself. It requires engineers and developers, working under human approval and oversight, to configure it, and those developers need privileges to access and implement components, agents, tools, and features of the platforms. But developers don’t just have these privileges unconstrained… right? Where trust and privileges exist, someone will try to abuse them.

Is AI dangerous?

AI is everywhere—writing emails, creating videos, even cloning voices. But artificial intelligence also comes with real risks, including privacy concerns, deepfakes, and smarter online scams. Artificial intelligence learns by spotting patterns in massive amounts of data—and that power can be misused. AI tools may collect personal information, create realistic fake content, or help scammers craft messages that look completely legit.

Can You Trust AI Code? I Built a Scanner to Find Out

Can you trust the code AI generates? In this video, we build a custom AI Security Benchmarking tool to put models like Gemini, Mistral, and GLM 4.5 to the test. Using Windsurf, OpenRouter, and Snyk, we automate a pipeline that prompts multiple LLMs to write an application, then immediately scans the output for security vulnerabilities.
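
A simplified sketch of that kind of pipeline, assuming an OPENROUTER_API_KEY environment variable and an authenticated Snyk CLI on the PATH; the model slugs, prompt, and file layout are placeholders rather than the exact setup from the video.

```python
# Sketch of an "LLM writes code, scanner judges it" pipeline: ask each model
# for an implementation via OpenRouter's OpenAI-compatible API, save the
# result, then run Snyk Code against it. Names and paths are illustrative.
import os
import pathlib
import subprocess
import requests

MODELS = ["google/gemini-flash-1.5", "mistralai/mistral-large"]  # example slugs
PROMPT = "Write a small Flask app with user login backed by SQLite."

def generate(model: str, prompt: str) -> str:
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def scan(workdir: pathlib.Path) -> int:
    # Snyk Code static analysis; a non-zero exit code means issues were found.
    return subprocess.run(["snyk", "code", "test", str(workdir)]).returncode

if __name__ == "__main__":
    for model in MODELS:
        workdir = pathlib.Path("generated") / model.replace("/", "_")
        workdir.mkdir(parents=True, exist_ok=True)
        # A real pipeline would strip markdown fences from the model output first.
        (workdir / "app.py").write_text(generate(model, PROMPT))
        print(f"{model}: snyk exit code {scan(workdir)}")
```

Comparing the scanner exit codes and findings per model gives a rough, repeatable benchmark of how much security review AI-generated code still needs.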