From Data to Action: Key Insights About Advancing Security Practices

The cybersecurity landscape is in constant flux, shaped by emerging technologies, evolving threats, and increasing regulatory demands. As organizations strive to protect their digital ecosystems, the challenge isn’t just collecting data—it’s turning that data into actionable strategies that drive meaningful change. Next week, we’ll unveil the 16th edition of Veracode’s flagship State of Software Security (SoSS) report—a cornerstone of the cybersecurity calendar.

Open Source Supply Chain Security: Best Practices

Open-source components are the building blocks of modern software, enabling your team to innovate and deliver features faster. This reliance, however, introduces a significant challenge: your application’s security is now tied to a vast and complex supply chain of code you didn’t write. The risks are escalating, with attackers targeting open-source libraries to launch widespread breaches.

Secure AI Code Generation: From Policy to Practice

If you’re using AI to generate code, you’re likely moving faster than ever. You’ve probably felt that surge of productivity when a complex logic problem gets solved in seconds or boilerplate code appears instantly. But here is the problem: speed without guardrails creates security debt, and with AI, that debt accumulates at a terrifying rate. Recent data paints a concerning picture.

Veracode Named a Leader in GigaOm Radar for Software Supply Chain Security

Modern software development is a balancing act. You are under constant pressure to innovate faster, ship features daily, and maintain near-perfect uptime. To meet these demands, development teams rely heavily on open-source libraries, APIs, and third-party components. It’s efficient, but it introduces a significant challenge: your attack surface is now composed of code you didn’t write. Securing this complex web of dependencies—your software supply chain—is no longer optional.

Clawing For Scraps: Risks of OpenClaw AKA ClawdBot

The world of AI is still advancing rapidly, but so are the threats. Wherever you get your news, Clawdbot (or is it Moltbot? or is it now OpenClaw?) is everywhere lately. You can’t avoid talk of this AI personal assistant. After some naming drama, it’s now officially called OpenClaw, and at the time of writing it has 166k stars on GitHub. The repository also has an alarming number of forks, issues, and pull requests.

Managing Software Supply Chain Security for the AI Era

Artificial intelligence has fundamentally changed how we build software. Generative AI tools help developers write code faster, automate mundane tasks, and solve complex logic problems in seconds. But this speed comes with a hidden cost. When you accelerate development without adjusting your security posture, you inadvertently accelerate risk. Relying on AI-generated code and open-source packages in cloud environments can expose your organization to serious, often silent, vulnerabilities.

DevSecOps Tools for Continuous Security Integration

If you’re an engineering manager in 2026, it’s almost certain you’re already exploring DevSecOps tools… by necessity as much as by choice. The reasons are clear: security is no longer a side concern or a tick-box for regulated industries. Even non-regulated businesses now face rigorous customer security questionnaires, growing SOC 2 and supply chain requirements, and persistent threats (especially related to AI-generated code) that make security non-negotiable.

Veracode and Palo Alto Networks: Unify Application Risk from Code to Cloud

Software development has entered a new era. Applications are built and deployed faster than ever, powered by cloud-native architectures, open-source software, and AI-assisted development. But this speed has introduced a new challenge: a dramatically expanded attack surface and a fragmented security model that struggles to keep up.

How to Implement AI Code Generation Securely in Your SDLC

AI adoption is no longer a future state; it’s the current reality. According to the 2025 Stack Overflow Developer Survey, 84% of respondents are using or planning to use AI tools in their development process. But speed without guardrails creates debt, and with AI, it creates security debt at an alarming rate. Recent data shows that nearly half the time, AI assistants are likely introducing risky, known vulnerabilities directly into your codebase.