
Latest News

Generative AI Results in 1760% Increase in BEC Attacks

As cybercriminals leverage tools like generative AI to make attacks easier to execute and more likely to succeed, phishing attacks continue to increase in frequency. I’ve been covering the cybercrime economy’s use of AI since it started. I’ve pointed out the simple misuse of ChatGPT when it launched, the creation of AI-based cybercrime platforms like FraudGPT, and how today’s cybercriminal can now create essentially foolproof malicious content.

How Much Will AI Help Cybercriminals?

Keep in mind that AI-enabled technologies, like KnowBe4’s Artificial Intelligence Defense Agents (AIDA), will also make defenses increasingly effective. I get asked a lot to comment on AI, usually by people who wonder, or are even a bit scared, about how it can be used to hurt them and others. It is certainly a top question on the minds of everyone in cybersecurity. One of the questions I hear most is: How much worse will AI make cybercrime? The quick answer is: no one knows.

Reduce insider risk with Nightfall Data Exfiltration Prevention

Nearly one third of all data breaches are caused by insiders. While you might immediately think of malicious insiders, like disgruntled or departing employees, insider risk can take numerous forms, both intentional and unintentional. Whichever form it takes, insider risks often have the same consequences as external risks, including data leaks, data loss, noncompliance, and more.
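One common signal an exfiltration-prevention tool can watch for is a sudden spike in how much data a user moves out compared with their own baseline. The sketch below is purely illustrative (it is not Nightfall's method; the thresholds, data shapes, and function name are assumptions):

```python
from statistics import mean, stdev

# Hypothetical sketch: flag users whose daily outbound data volume is far
# above their own historical baseline -- one simple heuristic an insider-risk
# or data-exfiltration tool might use. All thresholds are illustrative.

def exfiltration_suspects(history, today, sigma=3.0, min_mb=100.0):
    """history: {user: [daily_mb, ...]}, today: {user: daily_mb}.
    Returns users whose volume today exceeds baseline + sigma * stddev
    (and an absolute floor, to avoid flagging tiny fluctuations)."""
    suspects = []
    for user, volume in today.items():
        past = history.get(user, [])
        if len(past) < 2:
            continue  # not enough history to establish a baseline
        baseline = mean(past)
        threshold = max(baseline + sigma * stdev(past), min_mb)
        if volume > threshold:
            suspects.append(user)
    return suspects

history = {
    "alice": [10, 12, 9, 11, 10],   # MB downloaded per day
    "bob": [50, 55, 48, 52, 51],
}
today = {"alice": 900, "bob": 53}
print(exfiltration_suspects(history, today))  # -> ['alice']
```

Real products combine many such signals (destination, file sensitivity, time of day) rather than volume alone, but the per-user baseline idea is the core of this one.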

Combining External Attack Surface Management and Crowdsourced Security Testing - Webinar Recap

Bugcrowd offers crowdsourced security testing through a community of white hat hackers. CyCognito offers automated discovery of an organization’s externally exposed attack surface. Combined, the two solutions allow for a comprehensive inventory of exposed assets to be included in the scope of bug bounties or pentests.

Nightfall AI Transforms Enterprise DLP with AI-Native Platform

Nightfall AI today unveiled new capabilities to transform data security for the modern enterprise. The industry's first generative AI (GenAI) DLP platform now offers coverage for SaaS Security Posture Management (SSPM), data encryption, data exfiltration prevention and sensitive data protection. These products expand the company's existing suite of data leak prevention (DLP) solutions for protecting data at rest and in use across SaaS applications, GenAI tools, email and endpoints.

Cybersecurity in the Age of AI: Exploring AI-Generated Cyber Attacks

Historically, cyber-attacks were labor-intensive, meticulously planned, and required extensive manual research. With the advent of AI, however, threat actors have harnessed its capabilities to orchestrate attacks with exceptional efficiency and potency. This technological shift enables them to execute more sophisticated, harder-to-detect attacks at scale.

OWASP Top 10 for LLM Applications - Critical Vulnerabilities and Risk Mitigation

GPT’s debut created a buzz, democratizing AI beyond tech circles. While its language expertise offers practical applications, security threats like malware and data leaks pose challenges. Organizations must carefully weigh the benefits against these security risks. Ensuring your safety while maximizing the benefits of large language models (LLMs) like ChatGPT means implementing practical safeguards and preparing for both current and future security challenges.
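One of the practical actions against the sensitive-information-disclosure risk in the OWASP Top 10 for LLM applications is redacting obvious secrets from a prompt before it leaves your environment. A minimal sketch, assuming simple regex patterns (the pattern set and function name here are illustrative, not an exhaustive or production-grade detector):

```python
import re

# Hypothetical sketch: scrub obvious sensitive tokens from text before
# sending it to an external LLM. Patterns below are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace each detected sensitive value with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```

Regex-based redaction only catches well-structured identifiers; free-form secrets (names, internal project details) need ML-based detection or policy controls on which data may reach the model at all.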

Nightfall expands its platform to meet modern enterprise DLP challenges

Legacy data leak prevention (DLP) solutions are failing. Simply put, they weren’t built for business environments rooted in SaaS apps and generative AI (GenAI) tools. Meanwhile, security threats are evolving at a breakneck pace, with as many as 95% of enterprises experiencing multiple breaches a year, and new attack surfaces are opening up rapidly following the shift to hybrid and cloud-based workspaces.

The rise of ChatGPT & GenAI and what it means for cybersecurity

The rise of ChatGPT and Generative AI has taken the world by storm. It has left no stone unturned and has strong implications for cybersecurity and SecOps. A big reason is that cybercriminals now use GenAI to increase the potency and frequency of their attacks on organizations. To cope, security teams naturally need to adapt, and they are looking for ways to leverage AI to counter these attacks in kind.

Securing AI

With the proliferation of AI/ML-enabled technologies delivering business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. Adopting an AI governance framework, such as the NIST AI RMF, to enable business innovation while managing risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles.