Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

What Is Red Team Penetration Testing?

Red Team Penetration Testing is a simulated cyberattack that mimics real-world threat behavior to identify vulnerabilities, test defenses, and evaluate how effectively an organization can detect and respond to an attack. It goes beyond traditional testing by focusing on how an attacker would actually move through an environment.

From Discovery to Defense: Why AI Red Teaming Is the Next Step After AI-SPM

This week, we announced the general availability of Evo AI-SPM, the first operational layer of Snyk’s AI Security Fabric. AI-SPM gives security teams something they’ve never had before: a system of record for AI risk, with the ability to discover models, frameworks, datasets, and agent infrastructure embedded directly in code. For many organizations, that discovery step is a breakthrough.

Beyond the Hype: Navigating the Security Risks and Safeguards of Generative AI Video

The rapid evolution of generative AI video models, such as Seedance 2.0, Kling 3.0 and OpenAI's Sora, has unlocked unprecedented creative potential. However, for cybersecurity professionals, these advancements represent a significant expansion of the corporate attack surface. In an era where "seeing is no longer believing," the integration of synthetic media into the enterprise workflow demands a rigorous security framework. This article explores the dual nature of AI video: the sophisticated threats it enables and how modern, enterprise-grade platforms are architecting defenses to mitigate these risks.

AI red teaming with John V.

Join us for this session of Defender Fridays as we explore AI red teaming with John V., AI risk, safety, and security specialist at the Institute for Security and Technology (IST). At Defender Fridays, we delve into the dynamic world of information security, exploring its defensive side with seasoned professionals from across the industry. Our aim is simple yet ambitious: to foster a collaborative space where ideas flow freely, experiences are shared, and knowledge expands.

BreachLock Expands Adversarial Exposure Validation (AEV) to Web Applications

BreachLock, a global leader in offensive security, today announced that its Adversarial Exposure Validation (AEV) solution now supports autonomous red teaming at the application layer, expanding beyond its initial network-layer capabilities introduced in early 2025.

LLM Red Teaming: Threats, Testing Process & Best Practices

LLM red teaming is a proactive security practice that involves systematically testing large language models (LLMs) with adversarial inputs to find vulnerabilities before deployment. By using manual or automated methods to probe for weaknesses, red teamers can identify issues like harmful content generation, bias, or security exploits, which are then addressed through a continuous “break-fix” loop to improve the model’s safety and reliability.

Automated Red Teaming: Capabilities, Pros/Cons, and Latest Trends

Automated red teaming uses software to simulate cyberattacks and test security defenses, helping organizations find and fix vulnerabilities more efficiently. It automates tasks like credential harvesting, system enumeration, and privilege escalation to test security posture in a continuous, scalable manner. Beyond traditional systems, automated red teaming can also be used for AI systems, where it tests for risks like data poisoning or prompt injection in generative models.