Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

I Tried 5 Prompt Injection Attacks (Here's What Happened)

In this video, we explore the growing security risk of prompt injection in large language model (LLM) applications. As AI becomes embedded in more products, new vulnerabilities emerge, especially through natural language manipulation. We break down how LLMs work, the importance of system prompts, and demonstrate five real-world prompt injection techniques used to extract sensitive information or bypass safeguards. You’ll see live examples using different models and learn why newer models are more resilient, but still not immune.

GitHub Spark vs. Replit - Vibe Code Challenge

We pit GitHub Spark (in public preview) against Replit's AI agent. The challenge? Build a fully functional community forum for DIY tips from a single prompt. We compare design aesthetics, mobile responsiveness, login security, and deployment speed to see which tool creates a truly production-ready application. Which one do you think deserved the win? Let me know in the comments!

Lovable vs. Bolt - Vibe Code Challenge

Which AI tool is better for building a real app without writing code, Bolt or Lovable? In this video, I put both AI app builders head-to-head using the exact same prompt to create a DIY home repair forum. From database setup to authentication, UI design, publishing, and security checks, we compare how each platform performs in real time. The goal isn’t just to generate something that looks like an app, it’s to see whether these tools can actually create something usable, functional, and potentially production-ready. We evaluate.

AI on the Radar: Securing AI Driven Development

Join Vandana and Rob in this insightful webinar exploring the rapidly evolving landscape of AI security. As we shift from simple query-response models to complex autonomous agents that can plan, execute code, and access sensitive APIs, the traditional security "locks" are no longer sufficient. This session dives deep into the OWASP AI Exchange, a community-driven initiative providing practical guidance and technical controls for securing AI systems.

Cursor Composer 1.5 is Here: Is It Actually Better?

Is Cursor’s new Composer 1.5 model a major leap forward, or just a marginal update? Today, we’re putting the latest version of Cursor’s agentic AI to the test using our "Production-Ready Note App" prompt. We compare the speed, UI design, and agentic capabilities of 1.5 against version 1.0. Most importantly, we run a full security audit using the Snyk extension to see if the AI-generated code is actually safe for production.