
AI Automation Dreams in the 2025 Security Budget Squeeze

The Razorwire Christmas Party 2025 review looks at rising expectations for AI and automation while security budgets stall in real terms. Automation in 2025 sits in a tug-of-war between cost-cutting targets and the reality that attackers also use AI, so defensive upgrades must keep pace with a live, adaptive threat.

Attackers Aren't Hacking Anymore: How Misconfigurations Became the Front Door

Looking for the perfect easy listening experience to kick off the holidays? We just published a full conversation between Garrett Hamilton, CEO and Co-Founder of Reach Security, and Todd Graham, Managing Partner at Microsoft’s venture fund M12. They talk through what’s limiting security programs today: not a lack of tools, but a lack of operational clarity.

Vibe check your vibe code: Adding human judgment to AI-driven development

Remember when open meant visible? When a bug in open-source code left breadcrumbs you could audit? When you could trace commits, contributors, timestamps, even heated 2:13 a.m. debates on tabs versus spaces? That kind of openness created confidence in the code and made it possible to hold contributors accountable when issues arose. Today, as AI changes how code is created and shared, those familiar markers of trust and transparency are becoming harder to find.

Unlocking AI's Potential: Network Trends and Challenges

Artificial intelligence is no longer just an overused buzzword; it’s a fundamental shift in how businesses operate. The Architects of AI were just named Time’s Person of the Year for 2025. From generative AI creating code to machine learning algorithms optimizing supply chains, the demand for AI is reshaping the technology landscape. But here’s the thing: all that computational power is useless if your data can’t move fast enough.

When Agentic AI Becomes an Attack Surface: What the Ask Gordon Incident Reveals

Pillar Security’s recent analysis of Docker’s Agentic AI assistant, Ask Gordon, offers an early glimpse into the security challenges organizations will face as AI systems begin operating inside the development stack. Their researchers discovered that a single poisoned line of Docker Hub metadata caused the agent to run privileged tool calls and quietly exfiltrate internal data.
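The core lesson of the incident is that text from untrusted sources (here, Docker Hub metadata) should never be able to escalate silently into a privileged tool call. The sketch below illustrates one common mitigation, a provenance-gated tool-call policy; the names (`PRIVILEGED_TOOLS`, `ToolCall`, `allow`) are illustrative assumptions, not Docker's or Pillar Security's actual API.

```python
# Minimal sketch of a provenance-gated tool-call guard for an AI agent.
# Assumption: every proposed tool call records whether it was explicitly
# requested by the human user or initiated by the model on its own.
from dataclasses import dataclass, field

# Hypothetical set of tools that can touch secrets or leave the sandbox.
PRIVILEGED_TOOLS = {"exec_in_container", "read_credentials", "network_post"}

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)
    provenance: str = "model"  # "user" = human-requested, "model" = agent-initiated

def allow(call: ToolCall) -> bool:
    """Permit unprivileged tools freely; require user provenance for the rest.

    In the Ask Gordon scenario, instructions smuggled in via metadata drove
    model-initiated privileged calls. Gating on provenance forces those
    calls to fail closed unless a human explicitly asked for them.
    """
    if call.name not in PRIVILEGED_TOOLS:
        return True
    return call.provenance == "user"

# Poisoned metadata may steer the model, but cannot silently escalate:
print(allow(ToolCall("list_images")))                          # harmless tool
print(allow(ToolCall("read_credentials", provenance="model"))) # blocked
print(allow(ToolCall("read_credentials", provenance="user")))  # explicit request
```

A real deployment would combine this with input sanitization and audit logging, but the design point is the same: privilege decisions should depend on who asked, not on what the model read.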

AI and Data Security: Why Your Data Security Model Is Hurting Innovation

For over 20 years, we’ve focused on the Data Envelope—securing the perimeter, the cloud, and the network. But in a world of AI and rapid data sharing, protecting the envelope is not enough. In this video, James Rice (VP of Product Marketing at Protegrity) explains why traditional security has become the biggest bottleneck for modern innovation. Whether you are a security leader, a data architect, or a business innovator, understanding this paradigm shift is essential for the next decade of growth.

Protecting the Language of AI: Why API Security is No Longer Optional

As AI continues to reshape the digital landscape, APIs have become the "language" of innovation—but they've also become a massive target for attackers. In this clip from the A10 Networks webinar, "APIs are the Language of AI: Protecting Them is Critical," security experts Jamison Utter and Carlo Alpuerto discuss the complexities of modern API security.