
The Risks and Rewards of ChatGPT in the Modern Business Environment

ChatGPT continues to lead the news cycle and grow in popularity, with new applications and uses seemingly uncovered each day for this innovative platform. However, for all its appeal, and for all the efficiencies it is already delivering to modern businesses, it is not without risks.

New AI Bot FraudGPT Hits the Dark Web to Aid Advanced Cybercriminals

Assisting with the creation of spear-phishing emails, cracking tools, and stolen-credit-card verification, FraudGPT will only accelerate the frequency and efficiency of attacks. When ChatGPT became available to the public, I warned about its misuse by cybercriminals. Because of the "ethical guardrails" built into tools like ChatGPT, there is only so far a cybercriminal can take the platform.

Software Security 2.0 - Securing AI Generated Code

The integration of machine learning into software development is revolutionizing the field, automating tasks and generating complex code snippets at an unprecedented scale. However, this powerful paradigm shift also presents significant challenges including the risk of introducing security flaws into the codebase. This issue is explored in depth in the paper Do Users Write More Insecure Code with AI Assistants? by Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh.
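One concrete pattern the paper's authors observed is that participants with an AI assistant were more likely to ship insecure solutions to security-sensitive tasks. As a hedged illustration (this example is ours, not taken from the paper), consider token generation: assistants often suggest the predictable `random` module where a cryptographically secure source is required.

```python
import random
import secrets

def insecure_session_token(length: int = 16) -> str:
    """Token built on random, which uses a predictable Mersenne Twister PRNG.
    This is the kind of subtle flaw an AI assistant can introduce."""
    return "".join(random.choice("0123456789abcdef") for _ in range(length))

def secure_session_token(length: int = 16) -> str:
    """Token built on secrets, Python's interface to a CSPRNG."""
    return secrets.token_hex(length // 2)
```

Both functions return a plausible-looking hex string, which is exactly why this class of bug survives code review: the insecure variant only fails under an attacker who can predict the PRNG state.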

GenAI is Everywhere. Now is the Time to Build a Strong Culture of Security.

Since Nightfall’s inception in 2018, we’ve made it our mission to equip companies with the tools that they need to encourage safe employee innovation. Today, we’re happy to announce that we’ve expanded Nightfall’s capabilities to protect sensitive data across generative AI (GenAI) tools and the cloud. Our latest product suite, Nightfall for GenAI, consists of three products: Nightfall for ChatGPT, Nightfall for SaaS, and Nightfall for LLMs.

Worried About Leaking Data to LLMs? Here's How Nightfall Can Help.

Since the widespread launch of GPT-3.5 in November of last year, we’ve seen a meteoric rise in generative AI (GenAI) tools, along with an onslaught of security concerns from both countries and companies around the globe. Tech leaders like Apple have warned employees against using ChatGPT and GitHub Copilot, while other major players like Samsung have gone so far as to ban GenAI tools outright. Why are companies taking such drastic measures to prevent data leaks to LLMs, you may ask?

ChatGPT DLP (Data Loss Prevention) - Nightfall DLP for Generative AI

**ChatGPT Data Leak Prevention (DLP) by Nightfall AI: Prevent Data Leaks and Protect Privacy** ChatGPT is a powerful AI utility that can be used for a variety of tasks, such as generating text, translating languages, and writing different kinds of creative content. However, it is important to use ChatGPT safely and securely to prevent data leaks, protect privacy, and reduce risk.
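The core idea behind DLP for chat tools is simple: scan outbound prompts for sensitive patterns and redact them before they leave your network. The sketch below is a minimal illustration of that flow, not Nightfall's implementation; a real product uses trained detectors rather than the bare regexes assumed here.

```python
import re

# Hypothetical detectors for illustration only -- production DLP relies on
# ML-based detection, validation (e.g. Luhn checks), and context scoring.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely sensitive substrings before the prompt is sent to an LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt
```

Intercepting and rewriting the prompt, rather than blocking the tool entirely, is what lets employees keep using ChatGPT without the drastic bans described above.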

Code Mirage: How cyber criminals harness AI-hallucinated code for malicious machinations

The landscape of cybercrime continues to evolve, and cybercriminals are constantly seeking new methods to compromise software projects and systems. In a disconcerting development, cybercriminals are now capitalizing on AI-generated, unpublished package names, also known as "AI-hallucinated packages," by publishing malicious packages under commonly hallucinated names.
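A practical defense is to gate every AI-suggested dependency before installation. The sketch below assumes a vetted internal allowlist (`KNOWN_PACKAGES` and the suggested names are invented for illustration); a fuller check would also query the registry for the package's age, maintainers, and download history.

```python
# Pre-install gate against hallucinated dependencies.
# KNOWN_PACKAGES stands in for a vetted internal allowlist; the names
# below are placeholders, not a real policy.
KNOWN_PACKAGES = {"requests", "numpy", "flask"}

def vet_dependency(name: str) -> bool:
    """Return True only if an AI-suggested package name is on the allowlist.

    Anything unknown is rejected: if the name was hallucinated, an attacker
    may have registered it on the public index with malicious code.
    """
    return name.lower() in KNOWN_PACKAGES

# Example: the second name is the kind of plausible-sounding package
# an LLM might invent and an attacker might squat on.
suggestions = ["requests", "flask-graphql-auth-utils"]
approved = [p for p in suggestions if vet_dependency(p)]
```

The point of the gate is that hallucinated names look plausible to humans too, so the decision should be mechanical, not left to a developer eyeballing `pip install` output.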

How Torq Socrates is Designed to Hyperautomate 90% of Tier-1 Analysis With Generative AI

Artificial intelligence (AI) has generated significant hype in recent years, and separating the promise from the reality can be challenging. At Torq, however, AI is not just a concept: it is revolutionizing the SOC, specifically Tier-1 security analysis, at a time when cybercriminals are growing more sophisticated in their tactics and techniques. Traditional security tools continue to fall short in detecting and mitigating these attacks effectively, particularly at scale.