
Machine Learning

Top tips: Watch out for these 4 machine learning risks

Top tips is a weekly column where we highlight what’s trending in the tech world today and list ways to explore these trends. This week, we’re looking at four machine learning-related risks to watch out for. Machine learning (ML) is truly mind-blowing tech. The very fact that we’ve been able to develop AI models that are capable of learning and improving over time is remarkable.

What You Need to Know About Hugging Face

The risk both to and from AI models is a topic so hot it has left the confines of security conferences and now dominates the headlines of major news sites. Indeed, the deluge of frightening hypotheticals can make navigating AI feel like crossing an entirely new frontier without a compass. And to be sure, AI poses many unique security challenges, but remember: both the media and AI companies have a vested interest in upping the fright hype to keep people talking.

Elevating Security Intelligence with Splunk UBA's Machine Learning Models

One of the most challenging aspects of running an effective Security Operations Center (SOC) is accounting for the high volume of notable events that do not actually present a risk to the business. These events often include common occurrences like users forgetting their passwords a ridiculous number of times or accessing systems at odd hours for valid reasons. Despite their benign nature, the sheer volume of these potential threats can overwhelm a limited staff.
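The core idea behind this kind of ML-driven triage is simple: score each user's activity against their own historical baseline and escalate only statistically unusual spikes. Here is a minimal sketch in Python, assuming a toy event stream; the user names, counts, and z-score threshold are all illustrative and not Splunk UBA's actual model.

```python
import math

# Hypothetical per-user history of daily notable-event counts.
history = {
    "alice": [3, 4, 2, 5, 3, 4],   # routinely forgets her password
    "bob":   [0, 1, 0, 0, 1, 0],   # almost never triggers events
}
today = {"alice": 5, "bob": 9}

def z_score(value, samples):
    """How many standard deviations `value` sits above the user's baseline."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    std = math.sqrt(var) or 1.0          # guard against a zero-variance baseline
    return (value - mean) / std

# Suppress events whose volume matches the user's own normal behavior;
# escalate only the statistically unusual spikes.
escalate = {u: c for u, c in today.items() if z_score(c, history[u]) > 3.0}
print(escalate)
```

Alice's five failed logins look noisy in absolute terms but sit well within her baseline, so they are suppressed; Bob's nine events are a huge deviation from near-zero and get escalated. Real behavioral-analytics models are far richer than a z-score, but the filtering principle is the same.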

ALPHV Blackcat, GCP-Native Attacks, Bandook RAT, NoaBot Miner, Ivanti Secure Vulnerabilities, and More: Hacker's Playbook Threat Coverage Round-up: February 2024

In this edition of the Hacker’s Playbook Threat Coverage round-up, we are highlighting attack coverage for newly discovered or analyzed threats, including those based on original research conducted by SafeBreach Labs. SafeBreach customers can select and run these attacks and more from the SafeBreach Hacker’s Playbook™ to ensure coverage against these advanced threats.

Data Scientists Targeted by Malicious Hugging Face ML Models with Silent Backdoor

In the realm of AI collaboration, Hugging Face reigns supreme. But could it be the target of model-based attacks? Recent JFrog findings suggest a concerning possibility, prompting a closer look at the platform’s security and signaling a new era of caution in AI research. The discussion of AI/machine learning (ML) model security is still not widespread enough, and this blog post aims to broaden the conversation around the topic.
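The reason a model file can carry a "silent backdoor" at all is that many model formats are serialized with Python's pickle, which can execute arbitrary callables during deserialization. The sketch below, assuming a hypothetical allowlist of risky module names, uses the standard-library `pickletools` to inspect a pickle stream for suspicious imports without ever loading it; it is an illustration of the attack class, not JFrog's actual scanner.

```python
import pickle
import pickletools

# Modules commonly abused for code execution on unpickling (illustrative list).
RISKY_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def scan_pickle(data: bytes) -> list[str]:
    """Return suspicious module references found in a pickle stream,
    using static opcode inspection only -- nothing is deserialized."""
    findings = []
    last_strings = []                     # STACK_GLOBAL reads module/name from the stack
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            last_strings = (last_strings + [arg])[-2:]
        if opcode.name == "GLOBAL":       # older protocols: arg is "module name"
            module = str(arg).split(" ")[0]
        elif opcode.name == "STACK_GLOBAL" and len(last_strings) == 2:
            module = last_strings[0]      # newer protocols: module pushed as a string
        else:
            continue
        if module.split(".")[0] in RISKY_MODULES:
            findings.append(module)
    return findings

# A backdoored object: __reduce__ makes unpickling call os.system.
class Payload:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

malicious = pickle.dumps(Payload())
print(scan_pickle(malicious))            # non-empty: flagged before any load
```

A benign pickle (say, a plain dict of weights metadata) produces no findings, while the payload above is flagged without the dangerous code ever running. This is also why formats like safetensors, which store raw tensors with no executable deserialization step, are increasingly preferred for sharing models.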

The DevSecOps Hangout

Curious to see what all the AI/ML hype is about? Watch our DevSecOps Hangout to hear how ML model management benefits organizations by providing a single place to manage ALL software binaries, bringing DevOps best practices to ML development, and allowing organizations to ensure the integrity and security of ML models, all while leveraging a solution they already have in place. You’ll also find expert educational talks and a panel discussion with our Technology Partner Qwak on MLOps, DevSecOps, AI, and machine learning.

Future of VPNs in Network Security for Workers

The landscape of network security is continuously evolving, and Virtual Private Networks (VPNs) are at the forefront of this change, especially in the context of worker security. As remote work becomes more prevalent and cyber threats more sophisticated, the role of VPNs in ensuring secure and private online activities for workers is more crucial than ever. Let's explore the anticipated advancements and trends in VPN technology that could redefine network security for workers.