Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How the Social Engineering Toolkit Helps Red Teams

The Social Engineering Toolkit, or SET, is a tool that security teams use to copy the tricks that attackers use. It helps them see how well a company reacts when a message or link does not look legitimate. It can also test how people respond when they land on a copied website. Most guides cover only basic SET features. This blog explains how experts use SET in real tests and how defenders notice SET activity before harm occurs.

Beyond the Sprint: The Power of Continuous Automated Red Teaming (CART)

Malicious threat actors don’t work a 9-to-5 schedule, and they definitely don’t take a break when your organization’s annual security assessments are complete. Instead, they constantly put your security posture to the test—day after day, month after month, all year long. That’s why annual penetration tests and periodic validation campaigns are insufficient in today’s threat landscape.

Beyond the Breach: Why Continuous Automated Red Teaming (CART) is the Future of Cybersecurity

Security teams are under immense pressure. Traditional red teaming and annual penetration tests aren’t cutting it anymore. Breaches are no longer rare events; they’re expected. What matters now is what happens after the breach. Enter Continuous Automated Red Teaming (CART). CART is transforming how leading security teams approach validation, visibility, and readiness.

Inside the Adversary's Mind: How Cloudflare's Red Team Hacks to Defend

Get a behind-the-scenes look at Cloudflare’s Red Team with Dan Jones — a Senior Security Engineer who thinks like an attacker to strengthen defenses. In this preview of his Cloudflare Connect 2025 talk, Dan shares how offensive security helps protect millions of Internet properties.

Best AI Red Teaming Tools: Top 7 Solutions in 2025

There was a time when “AI red teaming” sounded like a novelty. Now, it’s fast becoming table stakes. If your organization is shipping machine learning or LLM-powered systems into the real world (especially in sensitive domains), you need to know how those systems behave under pressure. That’s where AI red teaming tools come in. These tools help teams stress-test AI the way it will actually be used (and misused).

Inside Identity Security - A Red Team Cybersecurity Documentary by Teleport

What happens when real attackers target your infrastructure —and your team has to defend it in real time? This 24-minute cybersecurity documentary takes you inside a high-stakes Red Team vs Blue Team exercise, where Persistent Security simulates an advanced attack on Teleport’s Identity Security team. As the defenders race to detect, respond, and protect their systems, the film reveals the pressure, strategy, and human dynamics behind modern threat detection.

What is AI Red Teaming?

AI red teaming is the process of simulating adversarial behavior to test the safety, security, and robustness of artificial intelligence systems. It draws inspiration from traditional cybersecurity red teaming (where ethical hackers emulate real attackers to expose flaws) but applies that mindset to machine learning models, data pipelines, and the broader AI stack.

Red Teaming Around the World (UK and Europe vs. US)

The differences between the US, the UK, and Europe are often minor but important regionally. Sometimes, we use different words to describe the same thing: French fries (US) vs. chips (UK) vs. pommes frites (France) are all fried potatoes. Sometimes, the same word can have different meanings, such as "football" and "football". Oddly, the same point holds true for Red Team testing.