
Securing the Agent Skills Registry: How Snyk and Tessl Are Setting the Standard

Agent skills are becoming the building blocks of AI-native software development, giving coding agents structured, versioned context: how to use your APIs, how to build in your codebase, and how to enforce your team's policies. Developers install them from registries the same way they install npm packages or Python libraries. But unlike npm or PyPI, the agent skills ecosystem is new.

I Read Cursor's Security Agent Prompts, So You Don't Have To

Cursor's security team built four autonomous agents that review 3,000+ PRs per week, catch 200+ vulnerabilities, and open fix PRs automatically. The engineering is impressive, and the prompts are shockingly simple. But there's a meaningful gap between "LLM agents reviewing PRs" and "enterprise security program," and that gap is exactly where things get interesting.

Lovable vs. Bolt - Vibe Code Challenge

Which AI tool is better for building a real app without writing code: Bolt or Lovable? In this video, I put both AI app builders head-to-head using the exact same prompt to create a DIY home repair forum. From database setup to authentication, UI design, publishing, and security checks, we compare how each platform performs in real time. The goal isn't just to generate something that looks like an app; it's to see whether these tools can actually create something usable, functional, and potentially production-ready.

The 89% Problem: How LLMs Are Resurrecting the "Dormant Majority" of Open Source

AI coding assistants are quietly resurrecting millions of abandoned open source packages. For the last decade, developers relied on a simple heuristic for open source security: Prevalence = Trust. If a package was downloaded millions of times a week (lodash, react, requests), we assumed it was "safe enough" because thousands of eyes were on it. If it was obscure, we approached with caution.

AI on the Radar: Securing AI Driven Development

Join Vandana and Rob in this insightful webinar exploring the rapidly evolving landscape of AI security. As we shift from simple query-response models to complex autonomous agents that can plan, execute code, and access sensitive APIs, the traditional security "locks" are no longer sufficient. This session dives deep into the OWASP AI Exchange, a community-driven initiative providing practical guidance and technical controls for securing AI systems.

The Rise of the AI Security Engineer: A New Discipline for an AI-Native World

We are witnessing the birth of a new profession at the intersection of security engineering and security operations, a discipline that didn't exist five years ago because the systems it protects didn't exist five years ago. As artificial intelligence moves from experimental to essential and agentic systems begin to perceive, reason, act, and learn autonomously, we need defenders who can operate at the same velocity. I'm talking about the AI Security Engineer.

Claude Code Security: A Welcome Evolution in the Remediation Loop

AI accelerates discovery — but enterprise trust still depends on deterministic validation, remediation automation, and governance at scale. Last Friday, Anthropic launched Claude Code Security, powered by Opus 4.6, inside Claude Code. The demo is impressive: Frontier AI reasoning scanned open source codebases and surfaced over 500 previously unknown high-severity vulnerabilities — including subtle heap buffer overflows that had survived decades of expert review and fuzzing.

Cursor Composer 1.5 is Here: Is It Actually Better?

Is Cursor’s new Composer 1.5 model a major leap forward, or just a marginal update? Today, we’re putting the latest version of Cursor’s agentic AI to the test using our "Production-Ready Note App" prompt. We compare the speed, UI design, and agentic capabilities of 1.5 against version 1.0. Most importantly, we run a full security audit using the Snyk extension to see if the AI-generated code is actually safe for production.