
AI

Understanding AI Package Hallucination: The latest dependency security threat

In this video, we explore AI package hallucination, a threat that arises when AI code-generation tools hallucinate open-source packages or libraries that don't exist. We explain why this happens and show a demo of ChatGPT recommending multiple nonexistent packages. We also explain why this is a prominent threat and how malicious hackers could harness this new vulnerability for evil: if an attacker registers a hallucinated name on a public registry, anyone who trusts the AI's suggestion ends up installing the attacker's code. It is the next evolution of typosquatting.
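To make the risk concrete, here is a minimal, hypothetical Python sketch (not taken from the video) of one sanity check it implies: confirming that an AI-suggested package name actually resolves on PyPI, via PyPI's public JSON API, before running pip install. The package names in the example are invented for illustration.

```python
import sys
import urllib.error
import urllib.request


def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has any project registered under this name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no project registered under this name
            return False
        raise  # other HTTP errors: surface them instead of guessing


if __name__ == "__main__":
    # Names an AI assistant might suggest; "totally-made-up-pkg" is invented.
    for suggested in sys.argv[1:] or ["requests", "totally-made-up-pkg"]:
        verdict = "exists on PyPI" if exists_on_pypi(suggested) else "NOT on PyPI"
        print(f"{suggested}: {verdict}")
```

Note that existence alone is not a safety signal: the attack works precisely because an attacker can register the hallucinated name later, so a name that suddenly starts resolving still deserves scrutiny (maintainer, age, download history) before it goes into a build.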

An investigation into code injection vulnerabilities caused by generative AI

Generative AI is an exciting technology that is now easily available through cloud APIs provided by companies such as Google and OpenAI. While it’s a powerful tool, using generative AI within application code introduces additional security considerations that developers must take into account to keep their applications secure. In this article, we look at the potential security implications of large language models (LLMs), a text-producing form of generative AI.
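As a hedged illustration of the class of issue the article investigates, the sketch below shows how code injection can appear when LLM output is executed instead of treated as data. ask_llm is a made-up stand-in for a real cloud API call, and the returned string simulates a model that has been manipulated into emitting attacker-controlled code.

```python
import ast


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a cloud LLM API call (e.g. Google or OpenAI).

    Here it simulates a manipulated model returning attacker-controlled code.
    """
    return "__import__('os').system('echo pwned')"


answer = ask_llm("Reply with a Python expression for 2 + 2.")

# UNSAFE: eval() executes whatever the model returned, so a manipulated
# response becomes arbitrary code execution inside the application.
# result = eval(answer)

# Safer: treat model output as data. ast.literal_eval accepts only literals
# (numbers, strings, lists, dicts, ...) and rejects calls and attribute access.
try:
    result = ast.literal_eval(answer)
    print("Parsed literal:", result)
except (ValueError, SyntaxError):
    print("Rejected non-literal model output:", answer)
```

The same principle applies when model output is interpolated into SQL, shell commands, or templates: validate or parameterize it as untrusted input rather than executing it directly.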

Casting a Cybersecurity Net to Secure Generative AI in Manufacturing

Generative AI has exploded in popularity across many industries. While this technology has many benefits, it also raises some unique cybersecurity concerns. Securing AI must be a top priority for organizations as they rush to implement these tools. The use of generative AI in manufacturing poses particular challenges. Over one-third of manufacturers plan to invest in this technology, making it the industry's fourth most common strategic business change.

How AI will impact cybersecurity: the beginning of fifth-gen SIEM

The power of artificial intelligence (AI) and machine learning (ML) is a double-edged sword — empowering cybercriminals and cybersecurity professionals alike. AI's capabilities, particularly generative AI's ability to automate tasks, extract information from vast amounts of data, and generate communications and media indistinguishable from the real thing, can all be used to enhance cyberattacks and campaigns.

The NIST AI Risk Management Framework: Building Trust in AI

The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a recent framework developed by the National Institute of Standards and Technology (NIST) to guide organizations across all sectors in the use of artificial intelligence (AI) and its systems. As AI continues to be implemented in nearly every sector — from healthcare to finance to national defense — it also brings new risks and concerns with it.

Nightfall AI: The First AI-Native Enterprise DLP Platform

Legacy DLP solutions never worked. They're point solutions that generate an overwhelming number of false positive alerts, and block the business in the process. But no longer. Enter: Nightfall AI, the first AI-native enterprise DLP platform that protects sensitive data across SaaS, generative AI (GenAI), email, and endpoints, all from the convenience of a unified console.

Best LLM Security Tools of 2024: Safeguarding Your Large Language Models

As large language models (LLMs) continue to push the boundaries of natural language processing, their widespread adoption across various industries has highlighted the critical need for robust security measures. These powerful AI systems, while immensely beneficial, are not immune to potential risks and vulnerabilities. In 2024, the landscape of LLM security tools has evolved to address the unique challenges posed by these advanced models, ensuring their safe and responsible deployment.

Elastic Security | AI Assistant Demo

Elastic AI Assistant can provide real-time, personalized alert insights — empowering security teams to stay one step ahead in the ever-evolving threat landscape. With the power of large language models (LLMs), the AI Assistant can process multiple alerts simultaneously, offering an unprecedented level of insight and customization. You can interact with your data by asking complex questions and receiving context-aware responses tailored to your needs. Watch this demo from James Spiteri, Director of Product Management at Elastic, to see what's new in the Elastic AI Assistant in Elastic Security 8.12.

The Security Risks of Microsoft Bing AI Chat at this Time

AI has long been an intriguing topic for every tech-savvy person, and the concept of AI chatbots is not entirely new. In 2023, AI chatbots are all the world can talk about, especially after the release of ChatGPT by OpenAI. Still, there was a past when AI chatbots, specifically Bing’s AI chatbot, Sydney, managed to wreak havoc over the internet and had to be forcefully shut down.

Unlocking Insights with AI: Introducing Data Explorer by Brivo

Welcome to the future of data analysis! 🌟 In this video, we're diving deep into Brivo's latest innovation: the Data Explorer, an AI-powered tool designed to revolutionize the way we approach data analysis. With the power of artificial intelligence, Data Explorer simplifies complex data sets, allowing you to uncover insights with minimal effort. 🧠💡