
Social Engineering Attacks Utilizing Generative AI Increase by 135%

New research from cybersecurity artificial intelligence (AI) company Darktrace shows a 135% increase in novel social engineering attacks, a rise the firm attributes to generative AI. The new form of social engineering behind this increase is much more sophisticated, using linguistic techniques such as increased text volume, punctuation, and sentence length to trick victims. We've recently covered ChatGPT scams and various other AI scams, but this attack class proves to be very different.

ChatGPT: The Cyber Risk vs. Reward

There has been a lot of talk about ChatGPT since it burst onto the market several months ago. And despite its infancy and the lack of standardized regulations around intelligent automation — the OpenAI tool has exploded into the tech ecosystems of businesses everywhere. While many see significant benefits from its use, few discuss the cyber risk to the industry and our organizations.

Fake ChatGPT Scam Turns into a Fraudulent Money-Making Scheme

Using the lure of ChatGPT’s AI as a means to find new ways to make money, scammers trick victims with a phishing-turned-vishing attack that ultimately takes the victims’ money. It’s probably safe to guess that anyone reading this article has either played with ChatGPT directly or has seen examples of its use on social media. The idea of being able to ask simple questions and get world-class expert answers in just about any area of knowledge is staggering.

The New Face of Fraud: FTC Sheds Light on AI-Enhanced Family Emergency Scams

The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI to imitate the voice of a "family member in distress". The alert opens: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble — he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him."

What Generative AI Means For Cybersecurity: Risk & Reward

In recent years, generative artificial intelligence (AI), especially Large Language Models (LLMs) like ChatGPT, has revolutionized the fields of AI and natural language processing. From automated customer support to realistic chatbots, we rely on AI much more than many of us probably realize. The AI hype train definitely reached full steam in the last several months, especially for cybersecurity use cases, with the release of a wave of new tools.

Artificial Intelligence Makes Phishing Text More Plausible

Cybersecurity experts continue to warn that advanced chatbots like ChatGPT are making it easier for cybercriminals to craft phishing emails with pristine spelling and grammar, the Guardian reports. Corey Thomas, CEO of Rapid7, stated, “Every hacker can now use AI that deals with all misspellings and poor grammar. The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case.”

ChatGPT Suffered From a Major Data Breach Exposing its Subscribers

ChatGPT is OpenAI's chatbot designed to simulate conversations with people. The tool utilizes a large language model to produce realistic and believable responses in a conversation. OpenAI offers a subscription service known as ChatGPT Plus that gives subscribers preferential access to the powerful AI system. Some of these subscribers were exposed in the first-ever ChatGPT data breach, which occurred in March of this year.

5 cyber threats that criminals can generate with the help of ChatGPT

ChatGPT, the public generative AI that came out in late November 2022, has raised legitimate concerns about its potential to amplify the severity and complexity of cyber threats. In fact, as soon as OpenAI announced its release, many security experts predicted that it would only be a matter of time before attackers started using this AI chatbot to craft malware or even augment phishing attacks.

How Can AI Predict Cybersecurity Incidents?

As technology becomes more prevalent in our lives, the risk of cybersecurity incidents is also increasing. Cybersecurity incidents can cause significant damage to organizations, including financial loss, reputational damage, and theft of sensitive data. Therefore, it is essential to have a robust cybersecurity system in place to protect against cyber-attacks. Artificial intelligence (AI) is one technology that can be used to predict cybersecurity incidents and mitigate their associated risks.
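At its simplest, predicting incidents from telemetry means establishing a statistical baseline and flagging deviations from it. The sketch below is a toy stand-in (not any vendor's actual method): it flags days whose failed-login counts sit more than a few standard deviations from the historical mean — the kind of anomaly signal real AI-driven systems compute across thousands of features. The data and threshold are illustrative assumptions.

```python
import statistics

def flag_anomalies(daily_failed_logins, threshold=3.0):
    """Return indices of days whose count deviates more than
    `threshold` population standard deviations from the mean.
    A minimal z-score baseline, far simpler than production ML."""
    mean = statistics.mean(daily_failed_logins)
    stdev = statistics.pstdev(daily_failed_logins)
    if stdev == 0:
        return []  # perfectly flat history: nothing stands out
    return [i for i, count in enumerate(daily_failed_logins)
            if abs(count - mean) / stdev > threshold]

# Thirty days of mostly-normal counts with one spike on day 20.
history = [12, 9, 11, 10, 13, 8, 12, 11, 9, 10,
           12, 10, 11, 9, 13, 10, 12, 11, 10, 9,
           95, 11, 10, 12, 9, 11, 10, 13, 12, 10]
print(flag_anomalies(history))
```

Real systems replace the single metric with learned models over many signals, but the core idea — score new observations against a learned baseline and alert on outliers — is the same.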

CrowdStrike's Artificial Intelligence Tooling Uses Similarity Search to Analyze Script-Based Malware Attack Techniques

According to the AV-TEST Institute, more than 1 billion strains of malware have been created, and more than 500,000 new pieces of malware are detected every day. One of the main reasons for this rapid growth is that malware creators frequently reuse source code. They modify existing malware to meet the specific objectives of an attack campaign or to avoid signature-based detection.
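Because attackers reuse source code, similarity search can catch a modified script by comparing it to known-malicious samples. The sketch below illustrates the general idea with character-shingle Jaccard similarity; this is a hedged simplification, not CrowdStrike's actual tooling (which operates on learned feature representations at much larger scale), and the sample strings are invented for illustration.

```python
def shingles(script, n=4):
    """Break a script into the set of its overlapping character n-grams."""
    return {script[i:i + n] for i in range(len(script) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

# A known-malicious PowerShell one-liner and a lightly modified variant.
known_malicious = "powershell -enc JABjAGwAaQBlAG4AdAA= ; IEX($payload)"
variant         = "powershell -enc JABjAGwAaQBlAG4AdAA= ; IEX($payload2)"
benign          = "Get-ChildItem C:\\Users | Sort-Object Length"

sim_variant = jaccard(shingles(known_malicious), shingles(variant))
sim_benign  = jaccard(shingles(known_malicious), shingles(benign))
print(round(sim_variant, 2), round(sim_benign, 2))
```

The variant scores near 1.0 against the known sample while the benign command scores near 0 — which is why code reuse makes modified malware findable even when its hash and exact bytes change.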