Cybersecurity experts continue to warn that advanced chatbots like ChatGPT are making it easier for cybercriminals to craft phishing emails with pristine spelling and grammar, the Guardian reports. Corey Thomas, CEO of Rapid7, stated, “Every hacker can now use AI that deals with all misspellings and poor grammar. The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case.”
ChatGPT is OpenAI's chatbot designed to simulate human conversation. The tool uses a large language model to produce realistic, believable conversational responses. OpenAI also offers a subscription service, ChatGPT Plus, that gives subscribers preferential access to the AI system. Some of those subscribers were exposed in the first ChatGPT data breach, which occurred in March 2023.
ChatGPT, the generative AI chatbot released to the public in late November 2022, has raised legitimate concerns about its potential to amplify the severity and complexity of cyberthreats. In fact, as soon as OpenAI announced its release, many security experts predicted that it would only be a matter of time before attackers started using the AI chatbot to craft malware or augment phishing attacks.
As technology becomes more prevalent in our lives, the risk of cybersecurity incidents is also increasing. Cybersecurity incidents can cause significant damage to organizations, including financial loss, reputational damage, and theft of sensitive data. Therefore, it is essential to have a robust cybersecurity system in place to protect against cyber-attacks. Artificial intelligence (AI) is one technology that can be used to predict cybersecurity incidents and mitigate their associated risks.
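One common way AI is applied to incident prediction is anomaly detection over security telemetry. The sketch below is purely illustrative (the data and threshold are made up, and real systems use far richer models): it flags values that sit unusually far above the mean of a series, the kind of statistical outlier an AI-driven monitoring system might surface as a possible incident.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Flag values more than `threshold` standard deviations above the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [c for c in counts if sigma and (c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; the spike at the end is the
# sort of pattern anomaly detection is meant to catch early.
hourly_failed_logins = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 300]
print(flag_anomalies(hourly_failed_logins))  # [300]
```

Production systems would replace this z-score rule with trained models over many signals, but the principle is the same: learn a baseline of normal behavior and alert on deviations.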
According to the AV-TEST Institute, more than 1 billion strains of malware have been created, and more than 500,000 new pieces of malware are detected every day. One of the main reasons for this rapid growth is that malware creators frequently reuse source code. They modify existing malware to meet the specific objectives of an attack campaign or to avoid signature-based detection.
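The evasion mechanism described above is easy to demonstrate with a toy example (the payload bytes here are placeholders, not real malware): a naive signature scanner that matches exact file hashes catches a known sample, but a trivially modified variant produces a completely different hash and slips past the check.

```python
import hashlib

def sha256_signature(payload: bytes) -> str:
    """Return the SHA-256 digest used as a naive exact-match 'signature'."""
    return hashlib.sha256(payload).hexdigest()

# A known-bad sample and a variant with a single byte appended.
original = b"...placeholder payload bytes..."
variant = original + b"\x00"

known_signatures = {sha256_signature(original)}

print(sha256_signature(original) in known_signatures)  # True: flagged
print(sha256_signature(variant) in known_signatures)   # False: evades the check
```

This is why modern antivirus products layer heuristic and behavioral analysis on top of signatures: recompiling or lightly editing reused code defeats exact matching but leaves behavioral patterns largely intact.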
In late 2022, artificial intelligence (AI) chat and conversational bots garnered large followings and user bases. AI chatbots, including ChatGPT, Meta's BlenderBot 3, and Google DeepMind's Sparrow, have numerous benefits and uses, including potentially replacing current search engines, but there are notable drawbacks.
Since it was first released to the public late last year, ChatGPT has captured widespread attention. OpenAI's large language model chatbot is intriguing for a variety of reasons, not least the manner in which it responds to human users. ChatGPT's language usage resembles that of an experienced professional. But while its responses are delivered with unshakeable confidence, its content is not always as impressive.
Some of the world's largest tech companies, like Google and Microsoft, have embedded AI into their business productivity suites, with Microsoft going a step further and releasing AI Copilot for Power Apps, its low-code platform. This integration has raised concerns about the decision-making power it grants business users, who can connect data to AI and grant access without oversight or control from IT.