
AI Principles in Practice: Auditability is non-negotiable

When AI acts on your behalf, auditability is non-negotiable. In the latest Principles in Practice video, Anand Srinivas, 1Password VP of Product & AI, explains why every AI agent action involving credentials must leave a clear audit trail:
- Who approved the access, and why?
- When and where were credentials used?
- What did the agent access, and when?
- Did access end when the task was completed?
Without auditability, AI usage can break trust between employees, security teams, customers, and regulators.
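As a rough sketch of what a record answering those four questions could look like, here is a minimal, hypothetical audit-event schema in Python; the field names are illustrative and not 1Password's actual log format:

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical audit record for an AI agent's credential access.
# Fields mirror the four questions above; names are illustrative only.
@dataclass
class AgentCredentialAuditEvent:
    agent_id: str            # which agent acted
    approver: str            # who approved the access
    approval_reason: str     # why it was approved
    credential_ref: str      # which credential was used (a reference, never the secret)
    resource_accessed: str   # what the agent accessed
    source_ip: str           # where the credential was used
    access_start: str        # when access began (ISO 8601, UTC)
    access_end: str | None   # when access ended; None means it never closed

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

event = AgentCredentialAuditEvent(
    agent_id="deploy-agent-07",
    approver="alice@example.com",
    approval_reason="rotate staging database password",
    credential_ref="vault://staging/db-admin",
    resource_accessed="postgres://staging-db:5432",
    source_ip="10.0.4.12",
    access_start=datetime.now(timezone.utc).isoformat(),
    access_end=None,  # an unanswered "did access end?" is itself a finding
)
print(event.to_json())

A record like this makes the trust question concrete: a missing approver or a null access_end is something a security team, customer, or regulator can point at.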
Featured Post

Innovation at Speed: Why Machine Identity Security Is Now a Boardroom Priority

CEOs across the manufacturing sector remain optimistic about the potential of digital transformation to boost productivity, efficiency, and competitiveness. Yet manufacturers face a double bind: innovate fast, with the growing pains that brings, or risk falling behind, while every step forward expands the attack surface. This sits alongside a stark reality: the manufacturing sector now suffers 26% of all cyberattacks, making it one of the most targeted industries globally. However, the most significant emerging threat is not always the one that leaders expect.

ChatGPT Oopsies Series of Information - The 443 Podcast - Episode 356

This week on the podcast, we cover a Politico report detailing a security lapse at CISA in the United States involving sensitive data and a public version of ChatGPT. Next, we dive into a couple of recently resolved vulnerabilities in the SolarWinds Web Help Desk application. Finally, we end with some closure on a story about two Coalfire penetration testers who were arrested several years ago for completing a penetration test in Iowa.

Claude Code writes and tests Cobalt Strike detection rules #cybersecurity #ai #securityoperations

Watch Claude Code generate production-ready Cobalt Strike detection rules in LimaCharlie. The agent defines detection requirements, creates rule logic for high-signal patterns, validates syntax, and deploys rules to the tenant. Named-pipe indicators and process-based signatures are tested against positive and negative controls to confirm accuracy. Security teams can operationalize threat-specific detections in minutes instead of hours.
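The rules in the video are written in LimaCharlie's own detection format; purely as an illustration of the positive/negative-control idea, the following standalone Python sketch checks named-pipe events against patterns commonly associated with default Cobalt Strike profiles. The patterns, event shape, and test samples are assumptions for demonstration, not the rules generated in the video:

import re

# Regexes for named-pipe names commonly associated with default
# Cobalt Strike malleable profiles (illustrative, not exhaustive).
PIPE_PATTERNS = [
    re.compile(r"\\MSSE-\d{4}-server$", re.IGNORECASE),
    re.compile(r"\\postex_[0-9a-f]{4}$", re.IGNORECASE),
    re.compile(r"\\msagent_[0-9a-f]{2}$", re.IGNORECASE),
]

def detect(event: dict) -> bool:
    """Return True if a pipe-creation event matches a suspicious pattern."""
    pipe_name = event.get("pipe_name", "")
    return any(p.search(pipe_name) for p in PIPE_PATTERNS)

# Positive controls: events the rule must fire on.
positives = [
    {"pipe_name": r"\\.\pipe\MSSE-1337-server"},
    {"pipe_name": r"\\.\pipe\postex_0a2f"},
]
# Negative controls: benign pipes the rule must ignore.
negatives = [
    {"pipe_name": r"\\.\pipe\mojo.1234.5678.9"},
    {"pipe_name": r"\\.\pipe\lsass"},
]

assert all(detect(e) for e in positives), "missed a known-bad pipe"
assert not any(detect(e) for e in negatives), "false positive on a benign pipe"
print("detection logic passed positive and negative controls")

Running both control sets before deployment is what turns a plausible-looking rule into one a team can trust in production.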

The AI Blind Spot Debt: The Hidden Cost Killing Your Innovation Strategy

In today’s AI rush, I’ve seen even the most disciplined organizations find it almost impossible to apply the hard-won lessons of DevOps and DevSecOps to AI adoption. These organizations often feel forced to choose between moving fast and staying in control. As a result, they develop a “wait and see” approach to AI usage and implementation, and it’s creating a new, more dangerous form of technical debt. I call it the AI Blind Spot Debt.

Cyberhaven DSPM: Uniting DSPM & DLP to Secure Data in the AI Era

Enterprise security programs were built for a time when data lived in a small number of predictable locations. That model no longer holds. Today, data is constantly created, copied, transformed, and shared across cloud applications, endpoints, on-prem systems, and generative AI tools, often without clear visibility. Protecting data in the AI era requires three pillars: holistic visibility across the full data lifecycle, a deep understanding of data with context (e.g.
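As a rough sketch of that "context" pillar, the snippet below pairs a data event with where it came from and where it is going, so policy can react to the flow rather than the content alone. The event shape and policy are hypothetical, not Cyberhaven's API:

from dataclasses import dataclass

# Hypothetical data-flow event: content classification plus movement context.
@dataclass
class DataFlowEvent:
    classification: str   # e.g. "source_code", "customer_pii"
    origin: str           # where the data came from
    destination: str      # where it is going (app, endpoint, GenAI tool)
    user: str

# Illustrative policy: the same content can be fine or risky depending on flow.
def evaluate(event: DataFlowEvent) -> str:
    genai_destinations = {"chatgpt.com", "claude.ai"}
    if event.classification == "customer_pii" and event.destination in genai_destinations:
        return "block"
    if event.classification == "source_code" and event.destination in genai_destinations:
        return "alert"
    return "allow"

print(evaluate(DataFlowEvent("customer_pii", "crm", "chatgpt.com", "bob@example.com")))      # block
print(evaluate(DataFlowEvent("source_code", "github.com/acme/app", "claude.ai", "dev@example.com")))  # alert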

When AI Agents Create Their Own Reddit: Moltbook Highlights Security Risks in the Agentic Action Layer

A new platform, Moltbook, has attracted significant attention within the AI community. It is notable not because humans are posting there, but because autonomous AI agents are. Moltbook is a social network designed for AI agents to post, comment, upvote, and even form communities. Humans can observe these interactions but cannot participate. This experiment reveals a striking reality: AI agents are coordinating, sharing code, and developing complex cultures without human visibility.

The Prescriptive Path to Operationalizing AI Security

In introducing the AI Security Fabric, we have outlined how security must evolve as software is built by humans, models, and autonomous agents working at machine speed. The Fabric defines the architectural shift required to build trust at AI speed, delivered through the Snyk AI Security Platform. We’re now focusing on the next question: how organizations put that vision into practice. Operationalizing AI security is not about enabling a single feature or deploying a tool.

Introducing the AI Security Fabric: Empowering Software Builders in the Era of AI

Today, we’re thrilled to introduce the AI Security Fabric, delivered through the Snyk AI Security Platform, and operationalized through a prescriptive path for AI security. As software creation shifts to humans, models, and autonomous agents working together at machine speed, security must evolve just as fundamentally. The AI Security Fabric defines the new paradigm, and the Prescriptive Path shows how the Snyk AI Security Platform gets you there.