CISOs aren’t losing sleep over zero-days. The real threat? Their own employees. From phishing to prompt injection, discover what keeps security leaders up at night.
AI is changing everything, but with innovation comes new risks. In this episode of AI on the Edge, we dive deep into OWASP's Top 10 for Large Language Models with security leader Steve Wilson (Exabeam). Discover why every tech company is suddenly talking about LLM security and how you can stay ahead. Inside this episode: why traditional security doesn't work for AI, plus actionable tips from Steve's new book, The Developer's Playbook for LLM Security, to protect your AI systems.
Large language models take in unstructured data. They transform it into context, embeddings, and answers. That journey touches raw files, vector stores, model logs, and third-party services. Traditional privacy programs focus on databases and forms. LLMs push risk to the edges. The riskiest moments are when you ingest messy content, when your system retrieves chunks to support an answer, and when an agent with tool access is tricked into over-sharing.
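Ingestion is the first of those risky moments, and it is also the easiest place to intervene. The sketch below is a minimal, illustrative example of masking obvious personal data in a chunk before it reaches an embedding model or vector store; the regex patterns and the `redact` helper are assumptions for illustration, not a production PII detector.

```python
import re

# Illustrative patterns only -- a real deployment would use a dedicated
# PII-detection service rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask known PII before the text is embedded or indexed for retrieval."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

chunk = "Contact Jane at jane.doe@example.com, SSN 123-45-6789."
print(redact(chunk))  # Contact Jane at [EMAIL], SSN [SSN].
```

Redacting at ingestion means the sensitive values never land in the vector store, so later retrieval and agent tool calls cannot leak what was never stored.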
AI agents now move data, collaborate, and make decisions at machine speed — millions of actions per second. But our entire security architecture was built for humans, not for autonomous AI. In this new Agentic World, every action is faster, every breach harder to detect, and every compliance gap more dangerous. Protecto introduces Agentic Controls — intelligent, context-aware CBAC Agents that live inside AI workflows. They understand policies written in plain English, enforce zero-trust decisions before data ever leaves its boundary, and protect privacy across industries.
AI On The Edge – Where Intelligence Meets Risk: Part 2. Join Amar Kanagaraj, Founder & CEO of Protecto, and Sabari Loganathan, Head of AI Strategy at Peloton, as they discuss how a chance project evolved into building world-class generative AI systems.
India’s Digital Personal Data Protection Act, 2023 (DPDP Act) is finally moving toward activation. In January 2025 the government published the Draft Digital Personal Data Protection Rules, 2025 for public consultation to operationalize the Act. As of late 2025, the Act is enacted but core provisions still await final notification, so a phased rollout remains likely.
Amar (Founder & CEO of Protecto) chats with Sabari Loganathan (Head of AI Strategy, Peloton) about how a chance project led to building world-class generative AI systems. From vector search to agentic AI and RAG, discover how Sabari turned technical breakthroughs into real enterprise outcomes.
Language models now touch contracts, tickets, CRM notes, recordings, and code. That means personal data, trade secrets, and regulated content move through prompts, embeddings, caches, and third-party endpoints. If your audit still reads like a generic security review, you will miss the places where leaks actually happen. A modern LLM Privacy Audit Framework starts where the risk starts.
Large language models are no longer side projects. Sales teams rely on them for emails, support teams for ticket summaries, legal for first-draft reviews, and product teams for search and personalization. That ubiquity changes the risk math. Sensitive information flows through prompts, fine-tuning sets, retrieval indexes, analytics stores, and vendor logs. Regulators now expect the same discipline for LLM pipelines that they expect for core systems handling customer data.
AI On The Edge: Scaling AI - Part 4. Amar Kanagaraj, CEO of Protecto, chats with Manoj Mohan, a veteran AI leader who has built large-scale data and AI platforms for Intuit, Meta, and Apple.