
Ultimate Guide to Open Source Security: Risks, Attacks & Defenses

Unlike closed-source or proprietary applications, open source software (OSS) exposes its source code, allowing anyone to view, modify, or contribute to it. This transparency cuts both ways: developer communities can uncover flaws faster, but attackers can also examine the code for weaknesses and readily exploit publicly reported open source vulnerabilities.

Mend.io Expands AI Native AppSec to Windsurf, Copilot, Claude Code, and Amazon Q Developer

Today, Mend.io is expanding its AppSec capabilities to secure the five most popular agentic IDEs: Windsurf, GitHub Copilot, Claude Code, Amazon Q Developer, and Cursor. This ensures that developers can move at AI speed without compromising security.

Building Strong Container Security for Modern Applications

Containers have transformed how modern applications are built and deployed. They’re lightweight, portable, and allow teams to move software from development to production faster than ever before. But as adoption has accelerated, so have security concerns. From vulnerable base images to exposed Kubernetes clusters, container security has become a top priority for AppSec and DevSecOps professionals.

Why Security Can Be Stricter: A Zero Trust Approach to AppSec with AI | Mend.io

Is AI making application security easier or harder? We spoke to Amit Chita, Field CTO at Mend.io, who argues that the rise of AI agents in the Software Development Lifecycle (SDLC) presents a unique opportunity for security teams to be stricter than ever before. As developers increasingly use AI agents and integrate LLMs into applications, the attack surface is evolving in ways traditional security can't handle. The only way forward is a Zero Trust approach to your own AI models. Join Ashish Rajan and Amit Chita as they discuss the new threats introduced by AI and how to build a resilient security program for this new era.

Securing AI Applications in the Cloud: Shadow AI, RAG & Real Risks | Mend.io

What does it take to secure AI-based applications in the cloud? In this episode, host Ashish Rajan sits down with Bar-el Tayouri, Head of Mend AI at Mend.io, to dive deep into the evolving world of AI security. From uncovering the hidden dangers of shadow AI to understanding the layers of an AI Bill of Materials (AIBOM), Bar-el breaks down the complexities of securing AI-driven systems. Learn about the risks of malicious models, the importance of red teaming, and how to balance innovation with security in a dynamic AI landscape. Topics include what an AIBOM is and why it matters, and the stages of AI adoption.

Code Scanning in 2025: Why, How & the Role of Scanning in AI Security

Code scanning is the process of automatically analyzing source code to identify potential security vulnerabilities, bugs, and other code quality issues. It’s a crucial part of secure application development, helping teams detect and fix problems early in the software development lifecycle. Code scanning tools mainly use static analysis methods (examining code without running it), in contrast to dynamic analysis tools which analyze applications while they are running.
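To make the static-analysis idea concrete, here is a toy scanner in Python (a minimal sketch of mine, not any particular scanning product): it parses source code into a syntax tree and flags risky call sites without ever running the code.

```python
import ast

# Toy rule set for this sketch; real scanners ship hundreds of rules.
DANGEROUS_CALLS = {"eval", "exec"}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, message) findings for dangerous call sites."""
    findings = []
    tree = ast.parse(source)          # static: parse only, never execute
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append((node.lineno, f"use of {node.func.id}() is unsafe"))
    return findings

sample = "x = input()\nresult = eval(x)\n"
print(scan_source(sample))  # → [(2, 'use of eval() is unsafe')]
```

Because the code is only parsed, the scan is safe to run on untrusted input and cheap enough to wire into every commit, which is what makes static analysis a natural fit for early-lifecycle checks.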

The AppSec Bottleneck: Why Fixing Can't Wait

Vulnerability detection isn't the main problem; remediation is. In today's fast-paced development world, security teams are overwhelmed with alerts, while developers struggle to keep up with security tasks that feel disconnected from their workflow. The real risk? Vulnerabilities that sit unaddressed in a growing backlog. Join Daniel Wyrzykowski, Product Manager at Mend.io, and Saoirse Hinksmon, Senior Product Marketing Manager at Mend.io, as they explore how to clear the remediation bottleneck.

Mend.io is Recognized in the 2025 Gartner Magic Quadrant for Application Security Testing

The software security landscape is evolving faster than ever, and AI is accelerating this change. As generative and embedded AI become core to how software is developed, tested, and deployed, security must adapt to protect an entirely new layer of risk. At Mend.io, we’ve spent the past year reimagining what Application Security Testing (AST) looks like in this new reality.

LLM Security in 2025: Risks, Mitigations & What's Next

Large language model (LLM) security refers to the strategies and practices that protect the confidentiality, integrity, and availability of AI systems that use large language models. These models, such as OpenAI’s GPT series, are trained on vast datasets and can generate, translate, summarize, and analyze text. However, like any complex software component, LLMs present unique attack surfaces because they can be influenced by the data they process and the prompts they receive from users.
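One concrete way that prompt-borne attack surface shows up is prompt injection, where untrusted input tries to override trusted instructions. The sketch below is purely illustrative (the message structure and keyword heuristic are assumptions of mine, not a specific vendor API): it keeps trusted instructions and user input in separate roles and screens input before it reaches the model.

```python
# Toy prompt-injection screen: a keyword heuristic, illustrative only.
# Real defenses layer model-side guardrails, output filtering, and red teaming.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
)

def is_suspicious(user_input: str) -> bool:
    """Flag inputs containing known injection phrasing (case-insensitive)."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_messages(user_input: str) -> list[dict]:
    """Keep trusted instructions in a separate role from untrusted input."""
    if is_suspicious(user_input):
        raise ValueError("possible prompt injection detected")
    return [
        {"role": "system",
         "content": "You are a support assistant. Answer only product questions."},
        {"role": "user", "content": user_input},
    ]
```

A keyword list is trivially bypassable on its own; the point is the structure: untrusted text never gets concatenated into the trusted instruction channel.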

Top 7 SAST tools for DevSecOps Teams in 2025

SAST (Static Application Security Testing) tools are crucial for DevSecOps, enabling automated code analysis to identify vulnerabilities early in the development lifecycle. They analyze source code without executing it, detecting issues like SQL injection, XSS, and buffer overflows. Popular SAST tools used by DevSecOps teams include Mend, Checkmarx, Snyk, Veracode, Black Duck, SonarQube, and Semgrep. Integrating SAST into CI/CD pipelines ensures continuous security checks as code is developed.
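To illustrate the kind of flaw a SAST tool flags, consider SQL injection in Python with the standard sqlite3 module (a generic example, not output from any tool listed above): string-formatted queries let attacker input rewrite the statement, while parameterized queries keep data and SQL separate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # Flagged by SAST: untrusted input concatenated into the SQL string.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats `name` strictly as data.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload dumps every row from the unsafe version:
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # both rows leak
print(find_user_safe(payload))    # no match: nobody is literally named that
```

Because the vulnerable pattern is visible in the source text itself, static analysis can catch it in CI before the code ever runs, which is exactly the early-lifecycle value SAST provides.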

42 DevOps Statistics to Know in 2025

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to shorten the development lifecycle and deliver high-quality software continuously. It emphasizes collaboration, automation, and integration between previously siloed teams. DevOps aims to improve deployment frequency, reduce failure rates, and accelerate recovery times.

Proven Best Practices for Safer Code that Work: AppSec for the Win | Webinar Mend.io

In this session, Chris Lindsey discusses proven best practices for building a robust AppSec program, offering actionable insights for both developers and security teams. Chris, with over 35 years of experience in software development and 15+ years in security, shares strategies that helped him run a successful security program.

Beyond Traditional AppSec: Navigating the New Frontier of AI Security with Mend AI

Hear from Bar-El Tayouri, Head of Mend AI, about the urgent need for a new approach to securing AI-driven applications. From understanding novel AI components and their risks to implementing a comprehensive AppSec program, this episode provides actionable insights for organizations building with AI.

AI Code Review in 2025: Technologies, Challenges & Best Practices

AI code review leverages artificial intelligence models and machine learning techniques to analyze and provide feedback on source code, automating and augmenting the traditional code review process. Integrated into development workflows, it can flag bugs, style violations, security vulnerabilities, and other issues earlier and more consistently than manual review alone.