
Building DLP for a ChatGPT World

Generative AI has gone from a novelty to an essential part of daily workflows across all teams at an organization. Whether it’s ChatGPT, Microsoft Copilot, Claude, or Google Gemini, employees are using chatbots to copy, paste, summarize, and query data at a pace and scale we have never seen before. Unfortunately, data security has not been a fundamental feature of generative AI, even as the technology’s popularity and functionality have exploded.

Nightfall Product Updates & News: April 2025

Managing endpoint security just got easier with Nightfall. Our latest updates enhance device management for endpoint security and expand data protection to give security and IT teams greater control with less overhead. Here are the latest updates and features from Nightfall at a glance. Read on to learn how these updates keep your security posture strong while minimizing distractions and unnecessary alerts.

G2 Recognizes Nightfall as Data Loss Prevention (DLP) Leader for Spring 2025

Nightfall has been named a leader in Data Loss Prevention (DLP), Sensitive Data Discovery, Data Security, and Cloud Data Security in G2’s Spring ‘25 reports. We’d like to extend a huge thank you to all of Nightfall’s customers and supporters for making this possible, and an even bigger one to the Nightfall team for their tireless dedication to building solutions that protect our customers’ sensitive data across the sprawling enterprise attack surface.

Secure API Keys and Passwords with Nightfall's AI-Native DLP

API keys and passwords are the keys to digital kingdoms, granting access to an organization’s most valuable systems and data. Traditional data loss prevention (DLP) systems often fall short in their attempts to protect sensitive data and secrets, leaving security teams overwhelmed with false positives and noise. At Nightfall, we understand these challenges and the evolving threat landscape across SaaS and endpoints.
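To make the false-positive problem concrete, here is a minimal sketch of the kind of pattern-based secret scanning that traditional DLP relies on. The pattern names and sample strings are illustrative assumptions, not Nightfall detectors; Nightfall's AI-native approach goes well beyond regexes like these, which is precisely why regex-only scanners generate so much noise.

```python
import re

# Illustrative-only patterns (assumptions for this sketch, not Nightfall's
# detectors). A bare regex scanner like this is what legacy DLP often uses,
# and it is prone to false positives on key-shaped but harmless strings.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"\bapi[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the input."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

sample = "config: AKIAABCDEFGHIJKLMNOP and api_key = 'sk_test_abcdefghijklmnopqrstuvwx'"
print(scan_for_secrets(sample))
```

A scanner like this flags both candidate secrets in the sample, but it has no way to tell a live credential from a placeholder in documentation; that gap is what context-aware detection aims to close.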

Insider Risk with Nightfall DLP: Episode 2 - Managing Shadow AI

Earlier this year, security researchers found more than 1 million records, including user data and API keys, in an exposed DeepSeek database. This massive exposure event tells us that data exfiltration risk and AI proliferation are forever linked together: as AI tools grow in popularity and complexity, exfiltration risk rises in kind.

The Essential DLP Checklist for Digital Health and Life Sciences

Security leaders in the life sciences and health technology fields know how important it is to safeguard sensitive data like protected health information (PHI), personally identifiable information (PII), and confidential research data. They also know what’s at stake with a security breach or data exfiltration event. But what’s not always clear is how to find the right solution to keep all that data safe.

Why the Future of DLP Is Invisible, Invincible, and Inexpensive

Legacy DLP solutions, as well as CASB and app-native DLP solutions, face significant challenges in providing comprehensive coverage across modern SaaS, AI apps, and endpoints. Lack of visibility, clumsy deployments, and expensive implementations are common drawbacks of using these tools — and they leave big gaps in data loss prevention. Decades on, today’s DLP solutions are still plagued by the same problems.

Insider Risk with Nightfall DLP: Episode 1 - Prevent Personal Cloud Store Uploads

Insider risk is a tricky challenge for security teams: how can you tell the good actors from the bad, or intentional actions from mistakes? Anyone with approved access to endpoints and SaaS systems could expose data to exfiltration risk if those systems are focused solely on preventing outsiders from getting in.

How to Prevent Sensitive Data Exposure to AI Chatbots Like DeepSeek

With the rise of AI chatbots such as DeepSeek, organizations face a growing challenge: how do you balance innovative technology with robust data protection? While AI promises to boost productivity and streamline workflows, it can also invite new risks. Sensitive data—whether it’s customer payment information or proprietary research—may inadvertently end up in the prompts or outputs of AI models.