Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How to Build AI Agents That Don't Break: Design, Risk & Defense Explained #aiagents #AISecurity

Agentic AI is evolving fast, but building agents that are *both* effective and secure is still a major gap for most teams. In this webinar, Mend.io’s Bar-El Tayouri and AI21 Labs’ Yehoshua “Shuki” Cohen share a practical, deeply technical walkthrough of what it really takes to design and defend AI agents. This is a tactical, no-fluff guide for anyone building AI agents in production: engineers, security leaders, and innovators shaping the next wave of AI systems.

Best SAST tools: Top 10 solutions in 2025

SAST (Static Application Security Testing) tools analyze an application’s source code to identify potential security vulnerabilities without executing the code. They are crucial for finding security flaws early in the development lifecycle, helping developers address issues before they become more costly and difficult to fix. Unlike dynamic analysis techniques, SAST operates without executing the program, focusing entirely on the static codebase.
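As an illustration, here is a minimal sketch of the kind of flaw SAST tools catch purely from reading source code: SQL built by string concatenation. The function names and schema below are hypothetical; tools such as Bandit flag the concatenation pattern in the first function without ever running it.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated directly into the SQL
    # string. A SAST scan flags this pattern statically, because the
    # flaw is visible in the code itself -- no execution needed.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query keeps user input out of the SQL
    # text, so injected quotes cannot change the statement.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With an input like `' OR '1'='1`, the unsafe version returns every row in the table, while the parameterized version simply finds no user by that literal name.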

AppSec metrics fail, Mend.io's Risk Reduction Dashboard fixes it

Today, we’re introducing our Risk Reduction Dashboard. This is a new way for security leaders to quantify their AppSec program’s impact, prioritize high-value fixes, and prove ROI with data-backed insights that go beyond raw vulnerability counts.

Secure Your App with Mend.io's AI-Native AppSec Platform (featuring ByteGrad)

This video, originally created by Wesley from ByteGrad, walks through how to secure your applications using Mend.io’s AI-Native AppSec Platform, including SAST, SCA, and SBOM scanning. Wesley explores how Mend integrates with GitHub, automates code fixes, and helps developers stay ahead of vulnerabilities. Creator: ByteGrad YouTube Channel.

If AI Security were food...What's on the menu? #aisecurity #food

How do you explain AI Security without the jargon? Easy: you make it food. In this video, we asked leading AI Security professionals to describe AI Security as a dish. Their answers turn complex ideas like prompt injection, data leaks, and model hardening into bite-sized insights you’ll actually remember. From layered lasagna to spicy tacos, each response brings a fresh perspective on what it means to build and protect secure AI systems.

Building a more secure npm ecosystem with Mend Renovate

Over the last year, we’ve seen significant attacks: the Shai-Hulud worm, the Nx build system compromise, and secrets leaked to public GitHub Actions logs via the tj-actions/changed-files compromise. In fact, I could spend this entire article just listing attacks, let alone discussing them.

Direct vs. Indirect AI Risks: What Security Teams Need to Know #AIsecurity #AppSec #AInative

AI coding assistants don’t just speed up development — they introduce two kinds of risks you can’t afford to ignore. Direct risks: vulnerabilities added straight into generated code. Indirect risks: exposure through how AI tools shape workflows, dependencies, and external connections. Both can create blind spots — and both demand visibility. Watch to learn how recognizing these layers helps secure your AI-driven workflows.

Best Application Security Testing Services to Know

Application Security Testing (AST) services use automated tools and manual techniques to find and fix security vulnerabilities in software, integrating security into the entire software development lifecycle (SDLC) to prevent threats and protect applications from attacks. Key services include Static Application Security Testing (SAST) for code-level analysis, Dynamic Application Security Testing (DAST) for runtime testing, and Interactive Application Security Testing (IAST), which combines both.