Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI in Security: Navigating Deep Fakes & Identity Theft

Dive into the future of security with Brivo as we explore the cutting-edge intersection of artificial intelligence (AI) and security threats. In this eye-opening video, Steve Van Till, our expert, unveils the emerging challenges of deep fakes, identity theft, and large-scale cyber crimes that AI brings to the forefront. Discover how Brivo's innovative solutions are paving the way for safer, smarter spaces in the face of these digital dangers. Don't miss out on expert insights and actionable tips to protect your digital identity and assets.

New Charlotte AI Innovations Enable Prompt Collaboration and Demystify Script Analysis

Since CrowdStrike Charlotte AI became generally available, we’ve seen firsthand how genAI can transform security operations, enabling teams to save hours across time-sensitive tasks and accelerate response to match the speed of modern adversaries.

The Double-Edged Sword of Artificial Intelligence (AI) in Cybersecurity

As artificial intelligence (AI) continues to advance, its impact on cybersecurity grows more significant. AI is an incredibly powerful tool in the hands of both cyber attackers and defenders, playing a pivotal role in the evolving landscape of digital threats and security defense mechanisms. In this blog, let’s explore the ways AI is employed by attackers to conduct cyber attacks, and how defenders are using AI to deter and counter threats.

AI's Role in Securing AEC Data: Paving the Path Forward

In the oft-obscure world of Architecture, Engineering, and Construction (AEC), the structures we see reaching for the skyline are not just feats of design and engineering but archives of data, each rivet and beam a data point in a colossal network of information. Yet, with these digital monoliths comes an invisible vulnerability – data control, a challenge that’s upending the AEC industry.

Nightfall's Firewall for AI

From customer service chatbots to enterprise search tools, it’s essential to protect your sensitive data while building or using AI. Enter: Nightfall’s Firewall for AI, which connects seamlessly via APIs and SDKs to detect sensitive data exposure in your AI apps and data pipelines. With Nightfall’s Firewall for AI, you can intercept prompts containing sensitive data before they’re sent to third-party LLMs or included in your training data.
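As a rough illustration of the interception pattern described above (this is not Nightfall's actual API; the function names are hypothetical, and the regexes stand in for the trained detectors a real product uses), a prompt firewall can scan and redact outbound prompts before they ever reach a third-party LLM:

```python
import re

# Hypothetical patterns for a few common sensitive-data types. A production
# firewall relies on trained detection models, not regexes alone.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data types detected in a prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

def redact(prompt: str) -> str:
    """Replace detected sensitive spans before the prompt leaves the app."""
    for name, pat in PATTERNS.items():
        prompt = pat.sub(f"[REDACTED:{name.upper()}]", prompt)
    return prompt

def guarded_llm_call(prompt: str, send):
    """Intercept the prompt; only a redacted version reaches the LLM.

    `send` is whatever callable actually talks to the third-party model.
    """
    if scan_prompt(prompt):
        prompt = redact(prompt)
    return send(prompt)
```

The same interception point can feed a training-data pipeline instead of a live model call, which is how the "before they're included in your training data" case is covered.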

Malicious Use of Generative AI Large Language Models Now Comes in Multiple Flavors

Analysis of malicious large language model (LLM) offerings on the dark web uncovers wide variation in service quality, methodology and value – with some being downright scams. Use of this technology has grown to the point that the cybercrime economy has expanded to include GenAI-based services such as FraudGPT and PoisonGPT, with many others joining their ranks.

What is Defensive AI and Why is it Essential in Bot Protection?

The term Artificial Intelligence (AI) has been thrown around freely as the technology has risen to the top of the tech agenda over the past couple of years. Security professionals see AI as both a risk to businesses and an opportunity. But could it also be a way to better defend your network against attacks? For many years, AI and machine learning have gone hand in hand, with AI used to inform defensive decisions and reduce the human element in more routine functions.
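As a minimal sketch of AI-assisted defensive decision-making in bot protection (the features, weights, and thresholds below are entirely hypothetical and hand-tuned; a real defensive-AI system would learn them from labeled traffic), scoring a client's likelihood of being automated might look like:

```python
from dataclasses import dataclass

@dataclass
class RequestFeatures:
    requests_per_minute: float
    interval_variance: float   # near-zero timing variance suggests automation
    headless_browser: bool
    failed_logins: int

def bot_score(f: RequestFeatures) -> float:
    """Combine signals into a 0..1 automation score (illustrative weights)."""
    score = 0.0
    if f.requests_per_minute > 60:
        score += 0.4
    if f.interval_variance < 0.05:
        score += 0.3
    if f.headless_browser:
        score += 0.2
    score += min(f.failed_logins, 5) * 0.02
    return min(score, 1.0)

def decide(f: RequestFeatures, threshold: float = 0.5) -> str:
    """Cut the human out of the routine call: challenge likely bots, allow the rest."""
    return "challenge" if bot_score(f) >= threshold else "allow"
```

This is the "basic functions" tier the excerpt describes: the model makes the routine allow/challenge call automatically, leaving analysts to review only the ambiguous middle of the score distribution.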

Responsible AI Licenses (RAIL): Here's What You Need to Know

Responsible AI Licenses (RAIL) are a class of licenses created to prevent harmful or unethical uses of artificial intelligence while still allowing the free and open sharing of models between those who intend to use and improve them for authorized purposes. Anyone can make their own version of RAIL for their model, and in doing so can impose more or fewer restrictions than those detailed in the template licenses.