
AI

Forward Networks Delivers First Generative AI Powered Feature

Natural language prompts put the power of NQE into the hands of every networking engineer. As featured in Network World, Forward Networks has raised the bar for network digital twin technology with AI Assist. This groundbreaking addition empowers NetOps, SecOps, and CloudOps professionals to harness the comprehensive insights of NQE through natural language prompts to quickly resolve complex network issues. See the feature in action.

Future of VPNs in Network Security for Workers

The landscape of network security is continuously evolving, and Virtual Private Networks (VPNs) are at the forefront of this change, especially in the context of worker security. As remote work becomes more prevalent and cyber threats more sophisticated, the role of VPNs in ensuring secure and private online activities for workers is more crucial than ever. Let's explore the anticipated advancements and trends in VPN technology that could redefine network security for workers.

Four Takeaways from the McKinsey AI Report

Artificial intelligence (AI) has been a hot topic of discussion this year among tech and cybersecurity professionals and the wider public. With the recent advent and rapid advancement of a number of publicly available generative AI tools—ChatGPT, Dall-E, and others—the subject of AI is at the top of many minds. Organizations and individuals alike have adopted these tools for a wide range of business and personal functions.

Use of Generative AI Apps Jumps 400% in 2023, Signaling the Potential for More AI-Themed Attacks

As the use of cloud SaaS platforms for generative AI solutions increases, so does the likelihood of more “GPT” attacks used to gather credentials, payment information, and corporate data. Netskope’s Cloud and Threat Report 2024 shows massive growth in the use of generative AI solutions – from just above 2% of enterprise users prior to 2023 to over 10% in November of last year. Mainstream AI services ChatGPT, Grammarly, and Google Bard top the list of those used.

How Cloudflare's AI WAF proactively detected the Ivanti Connect Secure critical zero-day vulnerability

Most WAF providers rely on reactive methods, responding to vulnerabilities after they have been discovered and exploited. However, we believe in proactively addressing potential risks and in using AI to achieve this. Today we are sharing a recent example of critical vulnerabilities (CVE-2023-46805 and CVE-2024-21887) and how Cloudflare's AI-powered Attack Score and Emergency Rules in the WAF have countered this threat.
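To make the layering concrete, below is a minimal sketch in Python of how an ML-derived attack score can sit in front of signature-style emergency rules. The scoring heuristic, rule pattern, request fields, and threshold are all invented for illustration; this is not Cloudflare's actual implementation or API.

```python
# Hypothetical sketch: an ML attack score layered with emergency rules.
# All names and patterns here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Request:
    path: str
    body: str

# Emergency rules: signature-style patterns pushed out once a CVE is public.
EMERGENCY_RULES = [
    "/api/v1/totp/user-backup-code/../..",  # hypothetical path-traversal pattern
]

def attack_score(req: Request) -> int:
    """Stand-in for an ML model scoring how attack-like a request looks (0-99)."""
    score = 0
    if "../" in req.path:                    # path traversal hint
        score += 40
    if ";" in req.body or "|" in req.body:   # crude command-injection hint
        score += 40
    return min(score, 99)

def waf_decision(req: Request) -> str:
    # Proactive layer: block highly suspicious requests even before any
    # signature for the specific vulnerability exists.
    if attack_score(req) >= 80:
        return "block (attack score)"
    # Reactive layer: emergency rules deployed after the CVE is disclosed.
    if any(sig in req.path for sig in EMERGENCY_RULES):
        return "block (emergency rule)"
    return "allow"

print(waf_decision(Request(path="/api/v1/totp/user-backup-code/../../system", body="cmd=;id")))
```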

Cato Taps Generative AI to Improve Threat Communication

Today, Cato is furthering our goal of simplifying security operations with two important additions to Cato SASE Cloud. First, we’re leveraging generative AI to summarize all the indicators related to a security issue. Second, we tapped ML to accelerate the identification and ranking of threats by finding similar past threats across an individual customer’s account and all Cato accounts.
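As a rough illustration of the second idea, the sketch below ranks a new threat against past incidents by indicator similarity. The incident IDs, indicator names, and bag-of-words cosine similarity are assumptions for illustration only, not Cato's actual model or data.

```python
# Hypothetical sketch: rank past incidents by similarity to a new threat.
# A real system would use richer indicators and a learned embedding.

from collections import Counter
from math import sqrt

def vectorize(indicators: list[str]) -> Counter:
    """Bag-of-words stand-in for an indicator embedding."""
    return Counter(indicators)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Made-up historical incidents and their observed indicators.
past_incidents = {
    "INC-1041": ["dns-tunneling", "rare-domain", "beaconing"],
    "INC-0987": ["credential-stuffing", "tor-exit-node"],
    "INC-1102": ["beaconing", "rare-domain", "powershell-download"],
}

new_threat = ["beaconing", "rare-domain"]

ranked = sorted(
    past_incidents.items(),
    key=lambda kv: cosine(vectorize(new_threat), vectorize(kv[1])),
    reverse=True,
)
for incident_id, indicators in ranked:
    print(incident_id, round(cosine(vectorize(new_threat), vectorize(indicators)), 2))
```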

Making Sense of AI in Cybersecurity

Unless you have been living under a rock, you have seen, heard, and interacted with Generative AI in the workplace. To boot, nearly every company is saying something to the effect of “our AI platform can help achieve better results, faster,” making it very confusing to know who is for real and who is simply riding the massive tidal wave that is Generative AI.

Fake Biden Robocall Demonstrates the Need for Artificial Intelligence Governance Regulation

The proliferation of artificial intelligence tools worldwide has generated concern among governments, organizations, and privacy advocates over the general lack of regulations or guidelines designed to protect against the misuse or overuse of this new technology.

AI Does Not Scare Me, But It Will Make The Problem Of Social Engineering Much Worse

I am not scared of AI. What I mean is that I do not think AI is going to kill humanity Terminator-style. I do think AI is going to be responsible for more cybercrime and more realistic phishing messages, but the problem is already serious: social engineering, even without AI, is involved in 70%–90% of successful cyberattacks.