
Latest News

New AI Bot FraudGPT Hits the Dark Web to Aid Advanced Cybercriminals

By assisting with the creation of spear-phishing emails, cracking tools, and the verification of stolen credit cards, FraudGPT will only accelerate the frequency and efficiency of attacks. When ChatGPT became available to the public, I warned about its misuse by cybercriminals. Because of the "ethical guardrails" built into tools like ChatGPT, there's only so far a cybercriminal can go with that platform.

Software Security 2.0 - Securing AI Generated Code

The integration of machine learning into software development is revolutionizing the field, automating tasks and generating complex code snippets at an unprecedented scale. However, this powerful paradigm shift also presents significant challenges including the risk of introducing security flaws into the codebase. This issue is explored in depth in the paper Do Users Write More Insecure Code with AI Assistants? by Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh.
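A canonical illustration of the kind of flaw such studies examine (sketched here for illustration, not drawn from the paper itself) is SQL assembled by string interpolation, a pattern AI assistants have been observed to suggest, versus a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # BAD: user input is interpolated directly into the query string,
    # so a crafted username can rewrite the SQL itself.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # GOOD: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"          # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # → 2 (every row leaks)
print(len(find_user_safe(conn, payload)))    # → 0 (no user has that name)
```

Both functions look equally plausible in a code-completion pane, which is precisely why review and scanning of AI-generated code matters.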

GenAI is Everywhere. Now is the Time to Build a Strong Culture of Security.

Since Nightfall’s inception in 2018, we’ve made it our mission to equip companies with the tools that they need to encourage safe employee innovation. Today, we’re happy to announce that we’ve expanded Nightfall’s capabilities to protect sensitive data across generative AI (GenAI) tools and the cloud. Our latest product suite, Nightfall for GenAI, consists of three products: Nightfall for ChatGPT, Nightfall for SaaS, and Nightfall for LLMs.

Worried About Leaking Data to LLMs? Here's How Nightfall Can Help.

Since the widespread launch of GPT-3.5 in November of last year, we’ve seen a meteoric rise in generative AI (GenAI) tools, along with an onslaught of security concerns from both countries and companies around the globe. Tech leaders like Apple have warned employees against using ChatGPT and GitHub Copilot, while other major players like Samsung have even gone so far as to completely ban GenAI tools. Why are companies taking such drastic measures to prevent data leaks to LLMs, you may ask?

Code Mirage: How cyber criminals harness AI-hallucinated code for malicious machinations

The landscape of cybercrime continues to evolve, and cybercriminals are constantly seeking new methods to compromise software projects and systems. In a disconcerting development, they are now capitalizing on "AI-hallucinated packages," plausible-sounding but unpublished package names that AI assistants invent, by registering malicious packages under the names most commonly hallucinated.
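One simple countermeasure is to vet AI-suggested dependency names against a known-good registry snapshot before installing anything. A minimal sketch (the function, the package names, and the vetted set below are all hypothetical):

```python
# Sketch: flag AI-suggested dependencies that are absent from a vetted
# registry snapshot, so hallucinated names never reach `pip install`.

def flag_unverified_packages(suggested, vetted_registry):
    """Return the suggested package names not present in the vetted set."""
    return [name for name in suggested if name not in vetted_registry]

# Hypothetical vetted snapshot and a hypothetical assistant suggestion.
vetted = {"requests", "numpy", "flask"}
suggested = ["requests", "hugging-cli", "numpy"]  # "hugging-cli" is made up

print(flag_unverified_packages(suggested, vetted))  # → ['hugging-cli']
```

In practice the vetted set would come from an internal mirror or lockfile rather than a hard-coded list, but the principle is the same: treat any never-before-seen package name as suspect until verified.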

How Torq Socrates is Designed to Hyperautomate 90% of Tier-1 Analysis With Generative AI

Artificial intelligence (AI) has generated significant hype in recent years, and separating the promise from reality can be challenging. However, at Torq, AI is not just a concept. It is a reality that is revolutionizing the SOC field, specifically in the area of Tier-1 security analysis, especially as cybercriminals become more sophisticated in their tactics and techniques. Traditional security tools continue to fall short in detecting and mitigating these attacks effectively, particularly at scale.

Effective Access and Collaboration on Large Lab Datasets using Egnyte's Smart Cache

The life sciences industry is at the forefront of data-intensive research and innovation. Scientists and researchers rely heavily on the collection, processing, and analysis of vast amounts of data generated by lab instruments. They are often challenged by errors or confusion in managing data flows that, in turn, have a direct impact on data quality and compliance with regulatory requirements.

2 (Realistic) Ways to Leverage AI In Cybersecurity

If you had to choose a security measure that would make the most difference to your cyber program right now, what would it be? Maybe you’d like to get another person on your team? Someone who is a skilled analyst, happy to do routine work and incredibly reliable. Or perhaps you’d prefer an investment that would give your existing team members back more of their time without compromising your ability to find and fix threats? What about human intelligence without human limitations?