
The latest news and information on data security, including privacy, protection, and encryption.

LLM Application for Protegrity AI Developer Edition

Securing LLM Workflows with Protegrity AI Developer Edition

Learn how to protect sensitive data and prevent malicious prompt injections in your AI applications. In this technical walkthrough, Dan Johnson, Software Engineer at Protegrity, demonstrates a dual-gate security architecture designed to safeguard large language models. Discover how to implement a security gateway that sits between your users and your LLM. This demonstration covers the integration of semantic guardrails and classification APIs to ensure data privacy and system integrity.
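A dual-gate gateway can be sketched as two checks around the model call: one on the way in, one on the way out. The sketch below is illustrative only; the pattern names, the redaction rule, and the function names are assumptions for demonstration, not the Protegrity AI Developer Edition API.

```python
import re

# Gate 1 heuristics: crude, illustrative prompt-injection patterns (assumed).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal.*system prompt", re.IGNORECASE),
]
# Gate 2 heuristic: redact anything shaped like a U.S. SSN (assumed policy).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def inbound_gate(prompt: str) -> str:
    """Gate 1: block likely prompt injections before they reach the LLM."""
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        raise ValueError("prompt rejected: possible injection")
    return prompt

def outbound_gate(response: str) -> str:
    """Gate 2: redact sensitive entities before the response reaches the user."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", response)

def secured_call(prompt: str, llm) -> str:
    """Run a request through both gates, with the LLM call in between."""
    return outbound_gate(llm(inbound_gate(prompt)))

# Stub LLM that leaks a sensitive value, to show gate 2 catching it:
echo = lambda p: f"You said: {p}. SSN on file: 123-45-6789."
print(secured_call("What is my account status?", echo))
```

In a production gateway the two gates would call semantic-guardrail and classification services rather than local regexes, but the control flow stays the same.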

Jupyter Notebook for Protegrity AI Developer Edition

Want to test Protegrity’s data protection features without any local installation? In this tutorial, Dan Johnson shows you how to make your first protect and unprotect API calls directly in your browser using our interactive Jupyter Notebook (Binder). This is the fastest way to see Protegrity’s Python SDK in action—authenticating, applying protection policies, and maintaining data utility in real time.
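The protect/unprotect round trip can be illustrated with a toy tokenizing vault. The class and method names below are stand-ins invented for this sketch; the real Protegrity SDK handles authentication and protection policies server-side, so consult the notebook for the actual calls.

```python
import secrets

class ToyVault:
    """Toy stand-in for a protect/unprotect service (names are assumptions)."""

    def __init__(self):
        self._tokens = {}  # token -> original value

    def protect(self, value: str) -> str:
        """Replace a sensitive value with a random token, keeping utility
        (the token is still a stable, joinable string)."""
        token = "TOK-" + secrets.token_hex(4)
        self._tokens[token] = value
        return token

    def unprotect(self, token: str) -> str:
        """Reveal the original value (in a real system, policy-gated)."""
        return self._tokens[token]

vault = ToyVault()
token = vault.protect("jane.doe@example.com")
assert token != "jane.doe@example.com"                   # stored data is tokenized
assert vault.unprotect(token) == "jane.doe@example.com"  # authorized reveal
```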

Top tips: What happens to your data after you click "Accept"

Top tips is a weekly column where we highlight what’s trending in the tech world and share ways to stay ahead. This week, we’re talking about a moment that’s become second nature to most of us. You open a website or install a new app. A banner appears. It’s long, filled with links, and clearly not meant to be read in a hurry. Your eyes jump straight to the familiar buttons. Accept all. One click, and you’re in. It feels harmless.

Redefining Data Security: From Insight to Action

Most organizations don't lack data security tools; they lack cohesion. Teams often layer DSPM solutions for discovery and classification on top of DLP tools for enforcement. On paper, this looks comprehensive. In practice, it creates friction. This is the platform problem: technology stitched together, not designed together. Solving it requires more than integrations; it requires a purpose-built platform that combines visibility, control, and action across all states of data.

Forensic Search & App Intelligence Add Up to Complete Insider Risk Visibility

Traditional data loss prevention stops at detection. You get an alert. You know something happened. But you don't see the full picture. When a departing engineer downloads your entire codebase over the holiday break, you need more than a policy violation. You need to see what they were doing before that moment, where the data came from, and what happened after. You need context, timeline, and the ability to trace every action.

Comprehensive Data Exfiltration Prevention: A New Architecture for Modern Threats

The exfiltration problem has evolved beyond what traditional DLP was designed to solve. Your employees work across personal AI assistants, multiple browsers, dozens of SaaS applications, and offline environments. They collaborate through Git, communicate via email clients, and store data on external drives. Each interaction represents a potential data loss vector—and legacy solutions can't see most of them.

The Nike Breach: Why Traditional DLP Failed, & What Security Teams Need Now

When WorldLeaks claimed to have exfiltrated 1.4TB of Nike's corporate data—188,347 files containing everything from product designs to manufacturing workflows—the incident revealed something more significant than another headline-grabbing breach. It exposed a fundamental gap in how organizations approach data loss prevention. The breach reportedly included technical packs, bills of materials, factory audits, strategic presentations, and six years of R&D archives.

The CISA ChatGPT Incident Makes the Case for AI-Native DLP

The acting director of America's Cybersecurity and Infrastructure Security Agency—the person tasked with defending federal networks against nation-state adversaries—triggered multiple automated security warnings by uploading sensitive government documents to ChatGPT. If this happened at CISA, it can happen at your organization too.

Entity Detection Plus Protection: Nightfall's New Approach to Comprehensive DLP

For years, data loss prevention has meant one thing: finding sensitive entities. Social Security numbers, credit card numbers, API keys—if you could pattern-match it, you could protect it. But this approach has always had fundamental limits. What happens when you need to protect customer IDs unique to your business? What about proprietary source code that doesn't contain any traditional PII?

Detect human names in logs with ML in Sensitive Data Scanner

Modern applications generate a constant stream of logs, some of which carry more information than they should. For too many organizations, logs include personally identifiable information (PII) such as customer names that were never meant to leave production systems. Teams try to limit this data exposure by using regular expressions to detect and obfuscate matches, only to discover that names like John O’Connor, Mary-Jane, Jane van der Meer, and A. García slip through.
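The failure mode is easy to reproduce with a naive "Firstname Lastname" regex, the kind of rule the article says teams reach for first. The regex below is an illustrative strawman, not the Sensitive Data Scanner's actual rules.

```python
import re

# Naive name matcher: two capitalized, ASCII-lowercase words.
NAME_RE = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

samples = ["John Smith", "John O'Connor", "Jane van der Meer", "A. García"]
hits = [s for s in samples if NAME_RE.fullmatch(s)]
print(hits)  # only the plain two-word name matches
```

Apostrophes, lowercase particles like "van der", initials, and accented characters all fall outside the pattern, which is exactly why ML-based detection is needed for names.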