Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI in Cybersecurity Certification

Positive feedback can lead to unintended consequences. A dog learned that saving kids from the River Seine earned food and praise, so he started dragging them in to “save” them. AI models optimize for feedback in a similar way. Cato’s AI in Cybersecurity course shows how to manage the risks. It’s free and earns you CPE credits.

You Can Create a Convincing Deepfake in Under an Hour

A non-technical user can produce a credible deepfake in under an hour using off-the-shelf tools and footage from normal video meetings. Common habits such as recording calls for later review give attackers enough material to train models, so every routine sales or onboarding call becomes potential training data. For more information about us, or if you have any questions you would like us to discuss, email podcast@razorthorn.com. We give our clients a personalised, integrated approach to information security, driven by our belief in quality and discretion.

AppSec in the age of AI: An RSA Conference preview

Application security is at a breaking point as development teams move faster than ever, aided by AI-powered coding assistants. While these tools boost productivity, they also introduce subtle errors and insecure patterns at scale. The result: a growing backlog of vulnerabilities that outpaces traditional AppSec models. This webcast examines the risks and opportunities of AI in AppSec and who will be addressing it at RSA Conference. We’ll explore how defenders can use AI to level the playing field with automated scanning, intelligent prioritization, and secure-by-design practices.

How Artificial Intelligence (AI) Can Increase Threat Detection and Response

Security leaders are being squeezed from both sides. On one side, threat actors are scaling operations with AI automation, using it to craft more convincing social engineering attacks, accelerate reconnaissance, and improve lateral movement. On the other side, defenders are drowning in telemetry, suffering under staffing constraints, and facing the harsh reality that threat actors don’t keep business hours.

How Governments Use AI Safely | AI Governance Explained

How are governments using AI while protecting citizens’ data and privacy? In this episode of AI on the Edge, Ciara Maerowitz, Chief Privacy Officer for the City of Phoenix, explains how cities implement AI governance, manage bias, ensure transparency, and assess AI risks. Learn how responsible AI frameworks, policies, and risk management help governments safely adopt artificial intelligence.

Why Soft Guardrails Get Us Hacked: The Case for Hard Boundaries in Agentic AI

One recurring theme in my research and writing on agentic AI security has been the distinction between soft guardrails and hard boundaries. As someone who serves on the Distinguished Review Board for the OWASP Agentic Top 10, and who spends every day thinking about how to secure agents across enterprise environments at Zenity, this distinction is not academic. It is potentially the single most important conceptual framework practitioners need to internalize right now.

An AI Agent Didn't Hack McKinsey. Its Exposed APIs Did.

This week’s McKinsey incident should be a wake-up call for every enterprise moving fast to deploy AI. Not because AI itself is inherently insecure. But because too many organizations are still thinking about AI security at the model layer, while the real enterprise risk sits in the action layer: the APIs, MCP servers, internal services, and shadow integrations that AI agents can reach, invoke, and manipulate. That is the part most companies still do not see.

AI, Application Security, and the Illusion of Control

Over the past year, AI-generated code has moved from novelty to normal. Developers are shipping faster, prototyping faster, refactoring faster… sometimes without fully understanding what they just merged. From the outside, it looks like a productivity renaissance. From the inside, it feels like something else: a new kind of operational risk that doesn’t behave like the old kind.

How Security Teams Fight Back Against AI-Powered Hackers

Last month, the Mexican government was hacked. 150GB of government data was stolen, including 195 million taxpayer records. The attack exploited roughly two dozen vulnerabilities across ten institutions. In the past, this would likely have taken a skilled team months to pull off. But of course, we’re living in a new age. This attack was executed by one person and their Claude Code assistant.