Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Netwrix's Culture of Innovation: Unleashing AI

Netwrix’s culture of innovation thrives on curiosity, collaboration, and accountability. From integrating AI across development and customer experience to fostering cross-team creativity, innovation here moves sideways as much as it does down. During Innovation Week, leaders explore how AI and the 1Secure Platform are redefining data and identity security for the future.

LLM guardrails: Best practices for deploying LLM apps securely

Prompt guardrails are a common first line of defense against client-level LLM application attacks, such as prompt injection and context poisoning. They’re also a critical component of a full defense-in-depth strategy for LLM security at the infrastructure, supply chain, and application levels. The specific guardrails that teams implement depend heavily on the use case.
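As a minimal sketch of what an input-side guardrail can look like, the snippet below screens incoming prompts against a small list of injection-style phrases before they reach the model. The pattern list and function name are illustrative assumptions, not a vetted ruleset; production guardrails typically combine such heuristics with classifier models and output filtering.

```python
import re

# Hypothetical heuristic guardrail: flags prompts resembling common
# injection phrasing. Patterns here are illustrative only.
INJECTION_PATTERNS = [
    r"ignore (all |any |the )?(previous |prior )?instructions",
    r"system prompt",
    r"you are now",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the guardrail, False if blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A blocked prompt would then be rejected or routed to review rather than forwarded to the LLM.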

30+ due diligence questions to ask AI vendors in a security review

Introducing third-party AI into your systems can be a milestone for productivity and growth, but it also expands your attack surface in unpredictable ways. If your AI vendors have weak controls, threats like data poisoning and algorithm failure can ripple through your systems.

What Technologies Make Online Money Transfers Secure?

A 2022 report by the Bank for International Settlements suggests that about $7.5 trillion is transferred daily around the globe. For context, the U.S. federal government spent $7.01 trillion in its 2025 fiscal year, which ran from October 2024 to September 2025, according to U.S. Treasury Fiscal Data. In other words, roughly 7% more money is traded on the foreign exchange market each day than the U.S. federal government spends in an entire year.
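The comparison in the excerpt checks out arithmetically, as this quick calculation (using the two figures cited above) shows:

```python
# Sanity check of the excerpt's comparison: daily global transfers
# (~$7.5T, BIS 2022) vs. annual U.S. federal spending ($7.01T, FY2025).
daily_transfers_tn = 7.5
annual_us_spending_tn = 7.01

excess = daily_transfers_tn / annual_us_spending_tn - 1
print(f"{excess:.0%}")  # prints "7%"
```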

Zenity Labs & MITRE ATLAS Collaborate to Advance AI Agent Security with the First Release of Agent-Focused TTPs

Zenity Labs worked in collaboration with MITRE ATLAS to incorporate the first 14 agent-focused techniques and sub-techniques, extending the framework beyond LLM threats to cover the unique risks posed by AI agents.

AI at Work: How Egnyte Intelligence Goes Beyond Generic Tools

AI isn’t the future; it’s here. Your CEO is talking about it in board meetings. Your manager wants to know if it’ll save time or just create more work. And you? You’re wondering if it’s going to make your job easier or just add noise. The excitement is justified: McKinsey says nearly 80% of companies are using AI somewhere in their business. But here’s what most people miss: very few have gotten it to work across their entire organization. Why?

CVE-2025-6515 Prompt Hijacking Attack - How Session Hijacking Affects MCP Ecosystems

JFrog Security Research recently discovered and disclosed multiple CVEs in oatpp-mcp – the Oat++ framework’s implementation of Anthropic’s Model Context Protocol (MCP) standard. Among these, CVE-2025-6515 stood out due to its potential for hijacking MCP session IDs. Within the context of MCP, we’ve dubbed this new attack technique “Prompt Hijacking”.
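A standard general defense against session-ID hijacking (shown here as an illustrative sketch, not the actual oatpp-mcp patch) is to derive session IDs from a cryptographically secure random source so they cannot be predicted or brute-forced:

```python
import secrets

# Illustrative mitigation: an unguessable session ID drawn from a
# cryptographically secure RNG (256 bits of entropy, URL-safe encoding).
def new_session_id() -> str:
    return secrets.token_urlsafe(32)
```

Pairing unpredictable IDs with server-side session binding and expiry further narrows the window for hijacking.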

Are we only one prompt away from using AI for evil? #cybersecurity #ai #infosec

Are we only one prompt away from using AI for evil? In this week's episode of The Cybersecurity Defenders Podcast, we explore a concerning reality about AI and cybersecurity. As AI becomes more prevalent within the threat actor community, exploits are being developed faster than humans can patch. The tools that help developers debug code can just as easily be used to weaponize vulnerabilities.

AI Privacy and Security: Key Risks & Protection Measures

AI systems learn from vast amounts of data and then generalize. That power is useful and also risky. Sensitive data can slip into prompts. Proprietary datasets can be memorized by models. Attackers can steer models to reveal secrets or corrupt results. Meanwhile, your company is probably experimenting with multiple AI tools at once. That creates hidden data flows and inconsistent controls. “Traditional” app security isn’t enough.
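One control implied by the passage above is scrubbing obviously sensitive values from text before it is sent to an external model. The sketch below is a minimal, assumption-laden example (only emails and SSN-style numbers, via regex); real deployments typically use dedicated DLP or PII-detection tooling.

```python
import re

# Illustrative redaction filter: replaces obvious sensitive values
# before a prompt leaves the organization. Patterns are examples only.
REDACTIONS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact(text: str) -> str:
    for label, pattern in REDACTIONS.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text
```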