
8.5 Billion Executions. 2 Real Bugs. Here's Why.

That is not a failure of fuzzing. It is a failure of interpretation. In a recent AFL++ fuzzing campaign targeting libarchive, we ran approximately 8.5 billion executions across all fuzzing phases, generated over a thousand crash files, and reduced them to two unique crash sites through structured crash triage and deduplication. This blog is a practical, engineering-first guide to that process, because if your fuzzing pipeline stops at crash counts, you are not measuring security.
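Deduplication of the kind described above commonly buckets crash files by their top stack frame. A minimal sketch, assuming ASan-style frame lines of the form `#0 0x... in func file:line`; the function names and paths below are illustrative, not from the actual campaign:

```python
import re
from collections import defaultdict

# Assumed ASan-style top frame: "#0 0xADDR in func file:line"
FRAME_RE = re.compile(r"#0 0x[0-9a-f]+ in (\S+) (\S+:\d+)")

def crash_site(asan_log: str) -> str:
    """Return a dedup key: the function and source location of frame #0."""
    m = FRAME_RE.search(asan_log)
    return f"{m.group(1)}@{m.group(2)}" if m else "unknown"

def dedupe(logs: dict) -> dict:
    """Group crash-file names by their crash site."""
    buckets = defaultdict(list)
    for name, log in logs.items():
        buckets[crash_site(log)].append(name)
    return dict(buckets)
```

In practice the key often covers the top few frames rather than just frame #0, since shallow wrappers can make distinct bugs collide on one frame.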

Your AppSec Pipeline Is Lying To You: More Vulnerabilities ≠ More Security

357 crash reports. 2 actual bugs. That is not a typo. That is the reality of modern application security testing. In a recent fuzzing campaign, over a thousand crash files were generated across billions of executions. After crash deduplication and triage, that number collapsed to just two unique issues. Not hundreds of vulnerabilities. Not dozens of risks. Two. And yet, most security teams would have celebrated the initial numbers.

Flutter App Security Testing: Why Most Tools Fail and What Actually Works

Most mobile security workflows end in a familiar way. A scan runs, a report is generated, and the output looks reassuring. There are no critical issues, maybe a few medium findings, nothing that blocks a release. The process completes, the team moves forward, and the app ships. At that moment, the assumption is clear. The app has been tested. The risk is understood. But there is a question that rarely gets asked, and it changes the entire conversation.

AI-driven DAST for mobile apps: The next evolution of Dynamic Security Testing

“AI-powered DAST” is everywhere. It signals progress, but assumes something fundamental was missing. It wasn’t. DAST struggled not from lack of intelligence, but from lack of depth. Most tools never reached inside authenticated, stateful, multi-step journeys where real logic, sensitive data, and critical vulnerabilities exist. That’s the part Appknox solved years ago. AI here is not a reset. It is an accelerator, applied to a system already operating where risk actually lives.

4 Phases, 357 Crashes, 2 Bugs: What an AFL++ Campaign Actually Looks Like

357 crash files. 2 real bug sites. That’s the outcome of this AFL++ campaign after roughly 8.5 billion executions across multiple harnesses, binaries, and phases. At first glance, everything looked like success. Crashes were increasing steadily. New inputs were being generated every few seconds. Coverage appeared to improve over time. From a surface-level perspective, the campaign looked productive. Then triage began.
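Execution totals like the one above are typically tallied from each harness's `fuzzer_stats` file, which AFL++ writes as `key : value` lines in every output directory. A minimal sketch, assuming that format; the stat values in the test are made up for illustration:

```python
# Sketch: sum total executions across several AFL++ output directories
# by parsing each fuzzer_stats file (assumed "key : value" line format).
def parse_stats(text: str) -> dict:
    """Parse one fuzzer_stats file into a key -> value dict."""
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats

def total_execs(stats_files: list) -> int:
    """Sum execs_done over all harnesses' stats files."""
    return sum(int(parse_stats(t).get("execs_done", 0)) for t in stats_files)
```

Tracking this number per phase is what exposes the gap between "crashes are increasing" and "new code is actually being reached".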

Gemini XSS Vulnerability: When AI Executes Malicious Code

Artificial intelligence is no longer just generating text. It generates and executes code in real time. With tools like Google Gemini, features such as code canvases and live previews are turning AI systems into interactive execution environments. This shift introduces a new and rapidly growing category of risk: AI security vulnerabilities tied to real-time code execution.
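The defensive core of this risk category is treating model output as untrusted input before it reaches any live preview. A minimal, non-Gemini-specific sketch using standard HTML escaping; `render_preview` is a hypothetical helper, not part of any real product:

```python
import html

# Sketch: any AI-generated string destined for an HTML preview must be
# escaped first, exactly as user-supplied input would be.
def render_preview(ai_output: str) -> str:
    """Escape model output before embedding it in a preview page."""
    return f"<pre>{html.escape(ai_output)}</pre>"
```

Real code canvases need more than escaping (sandboxed iframes, a strict Content Security Policy), but unescaped rendering is where XSS starts.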

Android Component Security: Common Misconfigurations That Expose Mobile Apps

When teams think about Android app security, the focus is usually on code-level protections: encryption, obfuscation, or binary hardening. But in practice, many of the most critical Android app vulnerabilities don’t originate in code at all. They come from misconfigurations. Issues in the AndroidManifest, insecure component exposure, and unsafe inter-app communication often create direct entry points for attackers. These are not edge cases. They are common, repeatable, and frequently exploited.
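One concrete instance of such a misconfiguration check: flag manifest components that are exported but declare no permission. A hedged sketch, assuming the standard `android` XML namespace on attributes; the component names in the test are illustrative:

```python
import xml.etree.ElementTree as ET

# Attributes like android:exported resolve to this namespace in the parsed tree.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def exposed_components(manifest_xml: str) -> list:
    """Return names of components exported without any permission guard."""
    root = ET.fromstring(manifest_xml)
    flagged = []
    for tag in ("activity", "service", "receiver", "provider"):
        for comp in root.iter(tag):
            exported = comp.get(ANDROID_NS + "exported") == "true"
            protected = comp.get(ANDROID_NS + "permission") is not None
            if exported and not protected:
                flagged.append(comp.get(ANDROID_NS + "name"))
    return flagged
```

A full check would also consider intent filters, which affect default export behavior on older API levels, but this captures the most common exposure pattern.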

How to Implement Mobile AppSec in a CI/CD Pipeline

For many engineering teams, CI/CD security appears to be working. Static scans run automatically. Vulnerabilities are flagged. Security checks exist somewhere in the pipeline. Yet issues still surface after release. The reason is rarely the absence of tools. More often, it is the absence of structural enforcement across the build lifecycle. Security controls run inside the pipeline, but they do not always guarantee that the artifact being tested is the same artifact that ultimately reaches users.
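The artifact-identity gap described above is usually closed by pinning the scanned build to a digest that the release gate re-verifies. A minimal sketch using SHA-256; the function names are hypothetical, not from any specific pipeline:

```python
import hashlib

def digest(data: bytes) -> str:
    """Content digest used to pin the artifact at scan time."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(release_bytes: bytes, scanned_digest: str) -> bool:
    """Release gate: fail if the artifact changed after the security scan."""
    return digest(release_bytes) == scanned_digest
```

The design point is structural: the gate compares bytes, not pipeline stage names, so a re-signed or re-built binary cannot silently inherit a clean scan result.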

Continuous Mobile Security Lifecycle: Appknox's Guide for Enterprise AppSec

Mobile app risk rarely emerges from negligence. It emerges from fragmentation. In most enterprises, security is applied in stages, and each control works in isolation. None governs how risk evolves over time. Mobile applications are distributed, long-lived systems. Once deployed, they operate outside centralized infrastructure control, exposed to shifting SDK dependencies, evolving APIs, regulatory change, and adaptive adversaries. Security gaps rarely appear within a stage. They appear in the transitions.

The Mobile AppSec Evaluation Guide for Security Leaders

Mobile security feels mature. Enterprises scan frequently, track findings, and report posture upward. Yet under regulatory scrutiny, cracks appear. This gap between perceived security and defensible governance is where mobile AppSec quietly fails. The illusion isn’t that security isn’t happening. It’s that it isn’t aligned with how regulated risk actually operates.