AI

Security Flaws within ChatGPT Ecosystem Allowed Access to Accounts on Third-Party Websites and Sensitive Data

Salt Labs researchers identified generative AI ecosystems as a new and interesting attack vector. Vulnerabilities found during this research into the ChatGPT ecosystem could have granted access to users' accounts, including their GitHub repositories, in some cases via 0-click attacks.

AI-Driven Voice Cloning Tech Used in Vishing Campaigns

Scammers are using AI technology to assist in voice phishing (vishing) campaigns, the Better Business Bureau (BBB) warns. Generative AI tools can now be used to create convincing imitations of people’s voices based on very small audio samples. “At work, you get a voicemail from your boss,” the BBB says. “They instruct you to wire thousands of dollars to a vendor for a rush project. The request is out of the blue. But it’s the boss’s orders, so you make the transfer.”

Five Principles for the Responsible Use, Adoption and Development of AI

We have been fantasising about artificial intelligence for a long time. This obsession materialises in some cultural masterpieces, with movies or books such as 2001: A Space Odyssey, Metropolis, Blade Runner, The Matrix, I, Robot, Westworld, and more. Most raise deep philosophical questions about human nature, but also explore the potential behaviours and ethics of artificial intelligence, usually through a rather pessimistic lens.

Chatbot security risks continue to proliferate

While the rise of ChatGPT and other AI chatbots has been hailed as a business game-changer, it is increasingly being seen as a critical security issue. Previously, we outlined the challenges created by ChatGPT and other forms of AI. In this blog post, we look at the growing threat from AI-associated cyber-attacks and discuss new guidance from the National Institute of Standards and Technology (NIST).

Generative AI Results In 1760% Increase in BEC Attacks

As cybercriminals leverage tools like generative AI, making attacks easier to execute and more likely to succeed, phishing attacks continue to increase in frequency. I’ve been covering the cybercrime economy’s use of AI since it started. I’ve pointed out the simple misuse of ChatGPT when it launched, the creation of AI-based cybercrime platforms like FraudGPT, and how today’s cybercriminal can basically create foolproof malicious content.

How Much Will AI Help Cybercriminals?

I get asked a lot to comment on AI, usually from people who wonder, or are even a bit scared, about how AI can be used to hurt them and others. It is certainly a top topic everyone in the cybersecurity world is wondering about. One of the key questions I get is: How much worse will AI make cybercrime? The quick answer is: No one knows. And do not forget, AI-enabled technologies, like KnowBe4’s Artificial Intelligence Defense Agents (AIDA), will make defenses increasingly better, too.

Reduce insider risk with Nightfall Data Exfiltration Prevention

Nearly one third of all data breaches are caused by insiders. While you might immediately think of malicious insiders, like disgruntled or departing employees, insider risk can take numerous forms, both intentional and unintentional. Whatever the intent, insider risks often have the same consequences as external risks, including data leaks, data loss, noncompliance, and more.

Combining External Attack Surface Management and Crowdsourced Security Testing - Webinar Recap

Bugcrowd offers crowdsourced security testing through a community of white hat hackers. CyCognito offers automated discovery of an organization’s externally exposed attack surface. Combined, the two solutions allow for a comprehensive inventory of exposed assets to be included in the scope of bug bounties or pentests.

Nightfall AI Transforms Enterprise DLP with AI-Native Platform

Nightfall AI today unveiled new capabilities to transform data security for the modern enterprise. The industry's first generative AI (GenAI) DLP platform now offers coverage for SaaS Security Posture Management (SSPM), data encryption, data exfiltration prevention and sensitive data protection. These products expand the company's existing suite of data leak prevention (DLP) solutions for protecting data at rest and in use across SaaS applications, GenAI tools, email and endpoints.