Scammers are using the lure of ChatGPT’s AI to find new ways to make money, tricking victims with a phishing-turned-vishing attack that ultimately steals their money. It’s probably safe to guess that anyone reading this article has either played with ChatGPT directly or seen examples of its use on social media. The idea of being able to ask simple questions and get world-class expert answers in just about any area of knowledge is staggering.
The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI to imitate the voice of a "family member in distress". The alert starts out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble — he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him."
Cybersecurity experts continue to warn that advanced chatbots like ChatGPT are making it easier for cybercriminals to craft phishing emails with pristine spelling and grammar, the Guardian reports. Corey Thomas, CEO of Rapid7, stated, “Every hacker can now use AI that deals with all misspellings and poor grammar. The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case.”
ChatGPT is OpenAI's chatbot designed to simulate human conversation. The tool uses a massive language model to produce realistic, believable responses. OpenAI offers a subscription service known as ChatGPT Plus that gives subscribers preferential access to the powerful AI system. Some of these subscribers were exposed in the first-ever ChatGPT data breach, which occurred in March 2023.
ChatGPT, the publicly available generative AI chatbot released in late November 2022, has raised legitimate concerns about its potential to amplify the severity and complexity of cyberthreats. In fact, as soon as OpenAI announced its release, many security experts predicted that it would only be a matter of time before attackers started using the AI chatbot to craft malware or augment phishing attacks.
As technology becomes more prevalent in our lives, the risk of cybersecurity incidents is also increasing. Cybersecurity incidents can cause significant damage to organizations, including financial loss, reputational damage, and theft of sensitive data. Therefore, it is essential to have a robust cybersecurity system in place to protect against cyber-attacks. Artificial intelligence (AI) is one technology that can be used to predict cybersecurity incidents and mitigate their associated risks.
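To make the idea of AI-assisted prediction concrete, here is a minimal illustrative sketch of the kind of statistical anomaly detection that underpins many such systems. The data feed and threshold are assumptions for illustration only; production systems train on far richer telemetry (network flows, process events, authentication logs) with real machine-learning models.

```python
# Toy sketch: flag anomalous days in a (hypothetical) feed of daily
# failed-login counts, using the robust median-absolute-deviation rule.
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Return indices whose value is a robust outlier.

    Uses the modified z-score 0.6745 * |x - median| / MAD, a common
    baseline technique for spotting unusual activity in security data.
    """
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:          # all values identical: nothing to flag
        return []
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# A quiet baseline week followed by a suspicious spike on day 7.
daily_failed_logins = [12, 15, 11, 14, 13, 12, 16, 240]
print(flag_anomalies(daily_failed_logins))  # → [7]
```

The median-based score is used rather than a plain mean/standard-deviation z-score because a single large spike inflates the standard deviation enough to hide itself; robust statistics keep the outlier visible.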
According to the AV-TEST Institute, more than 1 billion strains of malware have been created, and more than 500,000 new pieces of malware are detected every day. One of the main reasons for this rapid growth is that malware creators frequently reuse source code. They modify existing malware to meet the specific objectives of an attack campaign or to avoid signature-based detection.
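The evasion trick described above can be sketched in a few lines. The "malware" bytes and signature database below are made up purely for illustration, but they show why exact-match (hash-based) signatures fail the moment an attacker changes even a single byte:

```python
# Sketch: hash-based signature detection and why trivial edits defeat it.
import hashlib

# Hypothetical signature database of known-bad SHA-256 hashes.
known_bad_hashes = {
    hashlib.sha256(b"original malware payload").hexdigest(),
}

def is_flagged(sample: bytes) -> bool:
    """Signature check: flags only exact byte-for-byte matches."""
    return hashlib.sha256(sample).hexdigest() in known_bad_hashes

print(is_flagged(b"original malware payload"))   # → True  (exact copy caught)
print(is_flagged(b"original malware payload!"))  # → False (one added byte evades)
```

Because a cryptographic hash changes completely under any modification, each tweaked variant needs its own signature, which is one reason defenders increasingly supplement signatures with behavioral and AI-based detection.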
In late 2022, artificial intelligence (AI) chat and conversational bots garnered large followings and user bases. AI chatbots, including ChatGPT, Meta’s BlenderBot 3, and Google DeepMind’s Sparrow, have numerous benefits and uses, including potentially replacing current search engines, but there are notable drawbacks.