
Latest Posts

Nightfall Named A Leader in Data Loss Prevention (DLP) by G2

Nightfall has been named a Leader in Data Loss Prevention (DLP), Sensitive Data Discovery, Data Security, and Cloud Data Security in G2’s Summer ‘24 reports. We’d like to extend a huge thank you to all of Nightfall’s customers and supporters for making this possible. We’re also happy to acknowledge the Nightfall team’s tireless innovation, all in pursuit of helping customers protect their sensitive data across the sprawling enterprise attack surface.

Is Slack using your data to train their AI models? Here's what you need to know.

AI is everywhere—but how can you be sure that your data isn’t being used to train the AI models that power your favorite SaaS apps like Slack? This topic reached a fever pitch on Hacker News last week, when a flurry of Slack users vented their frustrations about the messaging app’s obtuse privacy policy. The main issue?

Building your own AI app? Here are 3 risks you need to know about, and how to mitigate them.

After the debut of ChatGPT, and the ensuing popularity of AI, many organizations are leveraging large language models (LLMs) to develop new AI-powered apps. Amidst this exciting wave of innovation, it’s essential for security teams, product managers, and developers to ensure that sensitive data doesn’t make its way into these apps during the model-building phase.

5 things you need to know to build a firewall for AI

Everywhere we look, organizations are harnessing the power of large language models (LLMs) to develop cutting-edge AI applications like chatbots, virtual assistants, and more. Yet even amidst the fast pace of innovation, it’s crucial for security teams and developers to take a moment to ensure that proper safeguards are in place to protect company and customer data.

4 key takeaways from the 2024 Verizon Data Breach Investigations Report

It’s that time of year again: The 2024 Verizon Data Breach Investigations Report is back with the top trends in security breaches over the past year. Read on for an at-a-glance look at some of the report’s most interesting—and actionable—findings.

Top 5 SaaS misconfigurations to avoid and why

Cloud storage services and SaaS apps like Google Drive and Microsoft OneDrive provide convenient, scalable solutions for managing documents, photos, and more—making them indispensable for modern work and personal life. However, misconfigured settings and permissions can lead to serious security breaches, noncompliance, and even the loss of customer trust. Let’s explore the 5 most common misconfiguration issues with real-world examples.

Here's what caused the Sisense data breach, and 5 tips for preventing it

From Uber in 2016 to Okta in 2023 to Sisense in 2024, it’s evident that there’s a pattern behind the tech industry’s most devastating breaches: Data sprawl. Let’s dive into how data sprawl played a part in last week’s Sisense breach, as well as how security teams can be proactive in defending against similar attacks.

Nightfall named a "Data Security Solution of the Year"

We’re thrilled to announce that Nightfall was selected as the “Data Security Solution of the Year” in the 2024 Data Breakthrough Awards. With enterprises scrambling to stay on the cutting edge of innovation, it’s all too easy to lose sight of data stewardship. In addition to SaaS apps, email, and endpoints, now enterprises must also safeguard their generative AI (GenAI) applications, including both custom and third-party GenAI tools.

Securing AI with Least Privilege

In the rapidly evolving AI landscape, the principle of least privilege is a crucial security and compliance consideration. Least privilege dictates that any entity—user or system—should have only the minimum level of access permissions necessary to perform its intended functions. This principle is especially vital when it comes to AI models, as it applies to both the training and inference phases.
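The least-privilege idea above can be sketched as a simple scope check that separates training-time and inference-time access. This is a minimal, hypothetical illustration; the role names, scope strings, and `authorize` function are assumptions for the example, not part of any particular framework or of Nightfall's product.

```python
# Minimal sketch of least-privilege access control for an AI pipeline.
# Roles, scopes, and authorize() are hypothetical names for illustration.

# Each role gets only the minimum scopes it needs:
# training jobs read curated datasets; app clients only invoke the model.
ROLE_SCOPES = {
    "trainer": {"dataset:read", "model:write"},
    "app-client": {"model:infer"},
}

def authorize(role: str, scope: str) -> bool:
    """Grant a request only if the role's minimal scope set includes it."""
    return scope in ROLE_SCOPES.get(role, set())

# An app client may run inference but cannot touch raw training data,
# and a trainer cannot serve inference traffic.
assert authorize("app-client", "model:infer")
assert not authorize("app-client", "dataset:read")
assert not authorize("trainer", "model:infer")
```

Denying by default (an unknown role gets an empty scope set) is the key design choice: any permission not explicitly granted is refused, for both the training and inference phases.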