

5 Intriguing Ways AI Is Changing the Landscape of Cyber Attacks

In today's world, cybercriminals are learning to harness the power of AI. Cybersecurity professionals must already contend with zero-day exploits, insider threats, and supply chain attacks, and now they must add Artificial Intelligence (AI), specifically generative AI, to that list. AI can revolutionize industries, but cybersecurity leaders and practitioners should be mindful of its capabilities and ensure it is used safely and effectively.

WormGPT and FraudGPT - The Rise of Malicious LLMs

As technology continues to evolve, there is a growing concern about the potential for large language models (LLMs), like ChatGPT, to be used for criminal purposes. In this blog, we will discuss two such LLM engines that were recently made available on underground forums: WormGPT and FraudGPT. If criminals were to possess their own ChatGPT-like tool, the implications for cybersecurity, social engineering, and overall digital safety could be significant.

The Risks and Rewards of ChatGPT in the Modern Business Environment

ChatGPT continues to lead the news cycle and grow in popularity, with new applications and uses for this innovative platform seemingly uncovered each day. However, as compelling as the solution is, and as many efficiencies as it already provides to modern businesses, it's not without risks.

New AI Bot FraudGPT Hits the Dark Web to Aid Advanced Cybercriminals

By assisting with the creation of spear-phishing emails, cracking tools, and stolen credit card verification, FraudGPT will only accelerate the frequency and efficiency of attacks. When ChatGPT became available to the public, I warned about its misuse by cybercriminals. Because of the "ethical guardrails" built into tools like ChatGPT, there is only so far a cybercriminal can take the platform.

Software Security 2.0 - Securing AI Generated Code

The integration of machine learning into software development is revolutionizing the field, automating tasks and generating complex code snippets at an unprecedented scale. However, this powerful paradigm shift also presents significant challenges, including the risk of introducing security flaws into the codebase. This issue is explored in depth in the paper "Do Users Write More Insecure Code with AI Assistants?" by Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh.
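To make that risk concrete, here is a minimal Python illustration (hypothetical, not drawn from the paper) of a flaw that code assistants are known to reproduce: assembling a SQL query through string interpolation rather than parameterization.

```python
import sqlite3

# Insecure pattern an assistant might suggest: interpolating user input
# directly into the SQL string, which permits SQL injection
# (e.g., username = "x' OR '1'='1").
def get_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# Safer equivalent: a parameterized query lets the driver handle escaping,
# so attacker-controlled input cannot alter the statement's structure.
def get_user_secure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

Both functions return the same row for benign input; only the second remains safe when the input is hostile, which is exactly the kind of distinction the paper found users were more likely to miss when leaning on an AI assistant.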

GenAI is Everywhere. Now is the Time to Build a Strong Culture of Security.

Since Nightfall’s inception in 2018, we’ve made it our mission to equip companies with the tools that they need to encourage safe employee innovation. Today, we’re happy to announce that we’ve expanded Nightfall’s capabilities to protect sensitive data across generative AI (GenAI) tools and the cloud. Our latest product suite, Nightfall for GenAI, consists of three products: Nightfall for ChatGPT, Nightfall for SaaS, and Nightfall for LLMs.
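As a rough sketch of the general pattern such tools implement (this is illustrative Python under our own assumptions, not Nightfall's actual API), a pre-submission scrubber detects sensitive values in a prompt and redacts them before anything reaches the model:

```python
import re

# Hypothetical detectors for illustration only; production DLP relies on
# far more robust detection (trained classifiers, checksum validation,
# surrounding context) than bare regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each detected sensitive value with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

# The sanitized prompt, not the raw one, is what gets sent to the LLM.
safe_prompt = redact("Reset the password for jane.doe@example.com, SSN 123-45-6789")
print(safe_prompt)
```

Typed placeholders (rather than blanket deletion) preserve enough context for the model to stay useful while keeping the underlying values out of third-party logs and training data.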

Worried About Leaking Data to LLMs? Here's How Nightfall Can Help.

Since the widespread launch of GPT-3.5 in November of last year, we’ve seen a meteoric rise in generative AI (GenAI) tools, along with an onslaught of security concerns from both countries and companies around the globe. Tech leaders like Apple have warned employees against using ChatGPT and GitHub Copilot, while other major players like Samsung have even gone so far as to ban GenAI tools entirely. Why are companies taking such drastic measures to prevent data leaks to LLMs, you may ask?