
Shadow AI: From Hidden Threat to Organizational Challenge

This blog post is adapted from a recent episode of The Cloudcast podcast featuring Rohan Sathe, CEO and co-founder of Nightfall AI. Listen to the full conversation here.

Your employees are uploading company documents to ChatGPT. Your healthcare teams are transcribing sensitive call recordings and feeding them into LLMs. Your finance department is pasting confidential spreadsheets into publicly accessible AI tools. And unless you have visibility into these workflows, you have no idea it's happening.

How AI Companies Can Use Data Lineage To Stop IP Theft - And Win When It Goes To Court

The AI boom is the 21st-century gold rush, and it is producing a wave of emerging AI companies. Being first to build and apply AI in novel ways often separates the winners from the losers. Because of this, companies can find themselves trading off time-to-market against security.

Considerations for Microsoft Copilot Studio vs. Foundry in Financial Services

Financial services organizations are increasingly turning to AI agents to drive productivity, automate workflows, and deliver an innovative edge. Within the Microsoft ecosystem, two agentic platforms, Copilot Studio and Foundry, are paving new paths for agent development and deployment. Despite their shared vision for enterprise AI, their differences have important implications for user groups, agent capabilities, and security priorities.

How AI-Driven Attacks Are Putting Gmail Security At Risk

Gmail has long been a common target for cybercriminals, and with the arrival of advanced AI tools, the threat level has increased significantly. Attackers no longer rely on generic phishing emails or crude scam tactics; they are using AI to craft convincing messages and imitate real support agents, making attacks look far more genuine. This shift in attack patterns has left Gmail users more vulnerable, because many can no longer differentiate between real and fake messages.

How Enterprise CPG Companies Can Safely Adopt LLMs Without Compromising Data Privacy

A major publicly traded CPG company wanted to adopt LLMs to improve performance marketing, analytics, and customer experience. However, the IT team blocked AI usage and uploads to external AI tools, since interacting with public AI models could expose sensitive brand, consumer, and financial data. This isn’t an isolated problem. It’s a pattern across enterprises: business agility collides with security requirements.

Inside the Agent Stack: Securing Azure AI Foundry-Built Agents

This blog kicks off our new series, Inside the Agent Stack, where we take you behind the scenes of today’s most widely adopted AI agent platforms and show you what it really takes to secure them. Each installment will dissect a specific platform, expose realistic attack paths, and share proven strategies that help organizations keep their AI agents safe, reliable, and compliant.

Are we on the path to AI defenders vs. AI attackers?

Swarms of AI bots are now being used to continuously test security perimeters. In this episode, Michael Baker, VP and Global CISO at DXC Technology, discusses the shift to AI-driven security operations. He recently met with startups working on agentic pentesting to find vulnerabilities before bad guys do. The advantage? You control these bots and get immediate feedback. The threat? Adversaries are building the exact same capabilities right now.

Proactively Identify and Eliminate Defensive Weaknesses with Cybersecurity Domain-Specific AI

AI is everywhere. I live in San Francisco, and a day doesn’t go by that I don’t see a billboard, an advertisement on the side of a bus, or a tech bro’s hoodie with two big letters on it: AI. It’s no different in cybersecurity marketing – AI terminology is everywhere. But too often, it’s tacked on as a buzzword – a thin layer washed on top of existing security tools, with little real impact. This makes it tricky to decipher what’s real and what’s hype.