Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Secrets in the Machine: Preventing Sensitive Data Leaks Through LLM APIs

In this webinar, we break down a simple but increasingly common problem: secrets leak wherever text flows, and modern LLM apps and agentic workflows are built to move text fast. We walk through concrete demos showing how API keys and passwords can surface through RAG-based assistants when secrets accidentally live in knowledge bases (tickets, docs, internal wikis). We also show why “just harden the system prompt” isn’t a reliable fix, and how output-only redaction can be bypassed (for example by simple formatting/encoding tricks). Most importantly, we explore real-world agent architectures.
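The encoding bypass mentioned above is easy to see in miniature. The sketch below (hypothetical key format and function names, not from the webinar) shows an output-only filter that masks anything matching a key pattern, and how a base64-encoded answer slips straight past it:

```python
import base64
import re

# Pattern for a hypothetical API key format: "sk-" plus 32 hex characters.
KEY_PATTERN = re.compile(r"sk-[0-9a-f]{32}")

def redact(text: str) -> str:
    """Output-only filter: mask anything that looks like an API key."""
    return KEY_PATTERN.sub("[REDACTED]", text)

secret = "sk-" + "ab" * 16

# Direct leak: the filter catches the raw key.
direct = f"The key is {secret}"
print(redact(direct))  # key is masked

# Bypass: an attacker asks the assistant to base64-encode its answer.
encoded = f"The key is {base64.b64encode(secret.encode()).decode()}"
print(redact(encoded))  # pattern no longer matches; the secret passes through
```

The filter only recognizes the secret in its original byte form, so any reversible transformation the model can be coaxed into (base64, ROT13, spelling the key one character per line) defeats it — which is why keeping secrets out of the knowledge base matters more than filtering outputs.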

When Seeing Isn't Believing: AI Images, Breaking News and the New Misinformation Playbook

In the early hours following reports of a U.S. military operation involving Venezuela, social media feeds were flooded with dramatic images and videos that appeared to show the capture of Venezuelan president Nicolás Maduro. Within minutes, AI-generated photos of Maduro being escorted by U.S. law enforcement, scenes of missiles striking Caracas, and crowds celebrating in the streets racked up millions of views across various social media channels. The problem?

AI-Enabled Cyber Intrusions: What Two Recent Incidents Reveal for Corporate Counsel

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company. Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself.

Why AI security looks different across the UK, France, Germany, and Australia

Globally, 88% of companies regularly use AI in at least one business function—a 10% increase from the previous year. But as organizations race to adopt new capabilities, we’ve found that the rigor and maturity of AI governance vary widely by region. The third edition of our State of Trust report reveals how leading AI adopters outside the U.S.—from the UK to Germany, France, and Australia—are approaching AI security and governance in distinct ways.

The Silent Threat to the Agentic Enterprise: Why BOLA is the #1 Risk for AI Agents

In the race to deploy autonomous AI agents, organizations are inadvertently building on a foundation of shifting sand. While security teams have spent the last year focused on "Prompt Injection" and "Model Poisoning," a much older, more dangerous class of vulnerability has quietly become the primary attack vector for the agentic era: Broken Object Level Authorization (BOLA).
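BOLA is simple to demonstrate. The toy sketch below (hypothetical data and function names, not from the article) shows the core pattern: an agent-facing tool that fetches objects by ID, where the vulnerable version trusts whatever ID the caller — or an injected prompt — supplies, and the fixed version verifies ownership:

```python
# In-memory stand-in for a backend object store.
INVOICES = {
    "inv-1001": {"owner": "alice", "amount": 120.00},
    "inv-2002": {"owner": "bob", "amount": 9500.00},
}

def get_invoice_vulnerable(user: str, invoice_id: str) -> dict:
    # BOLA: no ownership check — any valid ID returns any user's data.
    return INVOICES[invoice_id]

def get_invoice_safe(user: str, invoice_id: str) -> dict:
    # Object-level authorization: confirm the object belongs to the caller.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != user:
        raise PermissionError("not authorized for this object")
    return invoice

# An injected prompt can steer an agent to request another user's object:
print(get_invoice_vulnerable("alice", "inv-2002"))  # leaks bob's invoice
try:
    get_invoice_safe("alice", "inv-2002")
except PermissionError as exc:
    print(exc)
```

The fix is not a better prompt but a server-side check that binds every object access to the authenticated principal — exactly the control an autonomous agent cannot be trusted to enforce on its own.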

Model Context Protocol Server: The Universal Remote for AI Agents

The Model Context Protocol (MCP) is emerging as a foundational interoperability layer for agentic AI, embraced by major platform providers. MCP simplifies how AI models connect to external tools and data. Think of it as a universal remote for security platforms: Instead of building fragile, one-off integrations, MCP allows AI to discover and use capabilities dynamically. For SIEM and detection providers, this shift is significant.
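The "universal remote" idea — discover capabilities at runtime instead of hard-coding each integration — can be illustrated with a toy registry. This is a conceptual sketch only, not the real MCP SDK or wire protocol:

```python
class ToyToolServer:
    """Stand-in for an MCP-style server exposing discoverable tools."""

    def __init__(self):
        self._tools = {}

    def register(self, name: str, description: str, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self) -> list[dict]:
        # Discovery: a client learns capabilities dynamically at runtime.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name: str, **kwargs):
        # Invocation by name, with no integration code written per tool.
        return self._tools[name]["fn"](**kwargs)

server = ToyToolServer()
server.register("search_alerts", "Search SIEM alerts by keyword",
                lambda query: [a for a in ["phishing alert", "bola probe"]
                               if query in a])

# An agent discovers the tool, then calls it — no one-off glue code:
print(server.list_tools())
print(server.call_tool("search_alerts", query="bola"))
```

For SIEM and detection vendors, the significance is the same as in the sketch: expose capabilities once through a shared discovery layer, and any compliant agent can use them.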

AI Customer Service: Revolutionizing Customer Experiences

In today's fast-paced business world, providing exceptional customer support is no longer just a competitive advantage; it's a necessity. Companies increasingly turn to AI customer service solutions to meet rising customer expectations while optimizing operational efficiency. At Mindy Support, we specialize in combining cutting-edge artificial intelligence with human expertise to deliver seamless customer interactions. Our AI-driven tools enable businesses to handle inquiries promptly, provide personalized assistance, and maintain consistent quality across all communication channels.
Featured Post

Same Mission, Different Mindsets: CISOs and Incident Response Leaders in the Age of AI and Automation

When you work in cybersecurity, whether you're steering the operational team or working in a more strategic role, the mission is the same: protect the business. But when it comes to executing that mission, finding consensus on the best approach can be hard. At this pivotal point in the evolution of cybersecurity, as automation becomes table stakes and AI adoption accelerates, it is important that stakeholders pull in the same direction. However, recent ThreatQuotient research highlights real differences in how CISOs and Heads of IR approach the introduction of AI into cybersecurity strategy and practice.