Generative AI DLP: How Does It Work?

As generative AI tools like ChatGPT, Claude, and Gemini become essential to the modern workplace, they bring a new, invisible threat: the risk of sensitive data leaking through every prompt and interaction. Traditional DLP tools are no longer enough to protect proprietary code, PII, and trade secrets from being absorbed into public AI models. This guide explores the mechanics of generative AI DLP (Data Loss Prevention) and how it creates a safety net between your team and the AI apps they use.
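To make the mechanics a bit more concrete before diving into the guide, here is a minimal sketch of the core inline-inspection idea: scan each outgoing prompt for sensitive patterns and block it before it reaches the AI service. Everything here (the pattern set, `scan_prompt`, `guard_prompt`) is a hypothetical illustration, not any vendor's actual detection engine.

```python
import re

# Illustrative patterns only. Production DLP engines combine pattern
# matching with fingerprinting, exact-data matching, and ML classifiers.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Allow or block a prompt before it reaches an AI service."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"Blocked by DLP policy: {findings}")
    return prompt

# guard_prompt("Summarize this record: SSN 123-45-6789")  # raises PermissionError
```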

Release 875: New Mac Features, Enhanced Monitoring, and Granular Data Mapping

This release delivers heavy-hitting updates to the Mac Agent, extends Windows monitoring into native desktop applications like WhatsApp, and provides administrators with more granular tools to manage data and triage security alerts. Here is a summary of the new features and improvements available in this release.

13 Real-Life Insider Threat Examples

While many organizations focus on external threat actors, insider threats are a significant risk that can devastate a business from within. Because insiders have legitimate access to a company’s systems, their actions, whether driven by financial gain or simple human error, often bypass security controls. And the problem is only getting worse: according to the Ponemon Institute, insider attacks increased by 47% from 2023 to 2025.

Proofpoint DLP vs. Trellix DLP: Which Is the Best Solution?

Proofpoint DLP and Trellix DLP are two notable data loss prevention solutions. In this blog, we’ll analyze both platforms in depth and see how they compare. We’ll also introduce Teramind as a compelling alternative that combines the best aspects of Proofpoint and Trellix while offering additional tools that can improve your workforce’s security and productivity.

What is Generative AI Security? Types, Risks & Best Practices

Generative AI security is the practice of protecting generative artificial intelligence models, applications, and their underlying training data from cyberattacks, data leakage, and unauthorized access. It focuses on securing both sides of the system: the AI itself (models, pipelines, APIs) and the sensitive data flowing into and out of it during real-world use.
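As a rough sketch of what "both sides" means in practice, the example below wraps a generic model call with an inbound control (redacting sensitive data before it enters the model) and an outbound control (scanning the response on the way out). The `call_model` parameter and the single email rule are simplifying assumptions; real deployments layer classification, access control, and logging on each side.

```python
import re

# One illustrative rule for brevity; a real pipeline would classify
# many data types on both the request and the response.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def redact(text: str) -> str:
    """Mask email addresses in text entering or leaving the model."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def secured_completion(prompt: str, call_model) -> str:
    """Wrap any model client with inbound and outbound data controls.

    `call_model` is a placeholder for whatever function actually sends
    the request (a cloud API client, a local model, etc.).
    """
    response = call_model(redact(prompt))  # inbound: sanitize the prompt
    return redact(response)                # outbound: sanitize the answer

# Example with a stubbed model:
# secured_completion("Email jane.doe@example.com the report",
#                    call_model=lambda p: f"echo: {p}")
```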

How to Handle AI Policy Enforcement in the Era of Shadow AI

Here’s the reality most security teams are already living: over 80% of employees are using unapproved AI tools at work, and nearly half are actively hiding that use from IT. The question facing every organization is no longer whether to adopt artificial intelligence, but how to secure the sensitive data flowing into it every single day. This is the governance gap.
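One hedged sketch of where enforcement can start: compare outbound request destinations against a catalog of known AI services and a sanctioned subset, and flag the difference as shadow AI. The host lists here are illustrative assumptions; a real policy engine would draw on a maintained domain catalog and tie verdicts to alerting and blocking.

```python
from urllib.parse import urlparse

# Both lists are illustrative; a real deployment would maintain a
# much larger catalog of AI service domains.
KNOWN_AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
SANCTIONED_AI_HOSTS = {"api.openai.com"}  # the approved subset

def classify_request(url: str) -> str:
    """Label an outbound request as non-AI, sanctioned AI, or shadow AI."""
    host = urlparse(url).hostname or ""
    if host not in KNOWN_AI_HOSTS:
        return "not-ai"
    return "sanctioned" if host in SANCTIONED_AI_HOSTS else "shadow-ai"

# classify_request("https://api.anthropic.com/v1/messages")  -> "shadow-ai"
# (a known AI service, but not on the sanctioned list in this sketch)
```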