Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

April 2024

Accelerate application code fixes with AI-powered Polaris Assist

We're excited to announce the availability of Polaris Assist, our AI-powered application security assistant that combines decades of real-world insights with a powerful large language model (LLM). Polaris Assist gives security and development teams easy-to-understand summaries of detected vulnerabilities and code fix recommendations to help them build secure software faster.

Fuel for Security AI

The big idea behind Corelight has always been simple: ground truth is priceless. Knowing what really happened, both now and looking back in time, is the constant that makes everything in security better, whether it is used to detect attacks, investigate routine alerts, respond to new vulnerabilities, or drive a full-scale incident response. We make no claim of authorship here; instead, we learn from the world’s most accomplished defenders through their use of Zeek® and Suricata®.

Accelerating AI Adoption: AI Workload Security for CNAPP

When it comes to securing applications in the cloud, adaptation is not just a strategy but a necessity. We’re currently experiencing a monumental shift driven by the mass adoption of AI, fundamentally changing the way companies operate. From optimizing efficiency through automation to transforming the customer experience with speed and personalization, AI has empowered developers with exciting new capabilities.

TrustCloud Product Updates: April 2024

You know us: Every month we’re cooking up something new! Here are the updates that hit TrustCloud this month. TrustShare is getting a huge AI glow-up: GraphAI will answer questionnaires for you with accurate, high-quality responses, and its generative AI capabilities will now fill in answers that are more context-aware, more natural, and more accurate than ever before.

Enhancing Cybersecurity with BlueVoyant's AI Technology for Emerging Vulnerabilities

After a new zero-day vulnerability is announced, the National Vulnerability Database (NVD) publishes a measure of its severity under the Common Vulnerability Scoring System (CVSS). CVSS scores are a crucial tool for organizations as they give an approximation of the severity of disclosed vulnerabilities.
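For readers unfamiliar with how those scores translate into severity labels, here is a minimal sketch of the qualitative rating bands defined in the CVSS v3.x specification (the function name is ours, not NVD's):

```python
def cvss_rating(base_score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(9.8))  # prints "Critical"
```

Note that the bands are defined on the rounded base score, which is why a 9.0 and a 10.0 both read as Critical despite describing quite different vulnerabilities.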

Elastic Security evolves into the first and only AI-driven security analytics solution

In our previous installment, we discussed the history of security information and event management (SIEM) solutions — from collection to organizational detections and finally to response and orchestration. Now, we are firmly in the SIEM 3.0 revolution and focused on applying generative AI to every applicable process in the security operations center, with tremendous success.

Navigating Security Concerns: Microsoft Copilot's Integration with Microsoft 365

There are so many exciting things happening in the AI space currently. One of them is the integration of Microsoft Copilot, a generative AI assistant, with Microsoft 365 applications. This fusion brings Copilot’s capabilities into the suite’s comprehensive office productivity tools to transform daily workloads and enhance productivity through the automation of mundane tasks, alongside offering insights and analyzing data.

AI-Assisted Phishing Attacks Are on the Rise

Threat actors are increasingly using generative AI tools to improve their phishing campaigns, according to a new report from Zscaler. “AI represents a paradigm shift in the realm of cybercrime, particularly for phishing scams,” the researchers write. “With the aid of generative AI, cybercriminals can rapidly construct highly convincing phishing campaigns that surpass previous benchmarks of complexity and effectiveness.”

AI-driven cyber attacks to be the norm within a year, say security leaders

New research from Netacea reveals 93% of security leaders expect to face daily AI-driven attacks by the end of this year. Ransomware and phishing attacks are expected to be enhanced by offensive AI, but bots remain an underestimated threat. All respondents are benefiting from AI in their security stack, but adoption of bot management is lagging behind.

AI Revolution in Access Control: Transforming Security with Brivo

Dive into the future of security with Brivo as we explore the transformative power of Artificial Intelligence in access control systems. 🔒💡 In this video, Steve Van Till, a pioneer in smart spaces technology, unveils how AI equips us with advanced toolsets, enabling more efficient and effective security solutions. Discover how Brivo is leading the charge in integrating AI into commercial real estate, multifamily residential, and large enterprises, ensuring unparalleled security automation. 🤖🔑

Safeguarding Your LLM-Powered Applications: A Comprehensive Approach

The rapid advancements in large language models (LLMs) have revolutionized the manner in which we interact with technology. These powerful AI systems have found their way into a wide range of applications, from conversational assistants and content generation tools to more complex decision-making systems. As the adoption of LLM-powered applications continues to grow, it has become increasingly crucial to prioritize the security and safety of these technologies.

Snyk Code's autofixing feature, DeepCode AI Fix, just got better

DeepCode AI Fix is an AI-powered feature that provides one-click, security-checked fixes within Snyk Code, a developer-focused, real-time SAST tool. Amongst the first semi-automated, in-IDE security fix features on the market, DeepCode AI Fix’s public beta was announced as part of Snyk Launch in April 2023. It delivered fixes to security issues detected by Snyk Code in real-time, in-line, and within the IDE.

Enhancing Developer Efficiency With AI-Powered Remediation

Code security remains a critical concern in software development, yet traditional methods of flaw remediation lack the technology to keep pace with the rapid evolution of code-generation practices, leaving developers unable to manage a burdensome and overwhelming security debt. For instance, when GitHub Copilot generated 435 code snippets, almost 36% of them had security weaknesses, regardless of the programming language.

Unlocking the Future: Brivo's AI-Driven Security Solutions

Dive into the world of advanced security with Brivo! In this video, we explore how Brivo, the pioneer in cloud-based access control and smart spaces technology, is revolutionizing security solutions. With over two decades of innovation, Brivo's open platform allows businesses to seamlessly integrate AI features for unparalleled access control and more. 🏠🔑

What is the Use of LLMs in Generative AI?

Generative AI is a rapidly maturing field that has captured the imagination of researchers, developers, and industries alike. It refers to artificial intelligence systems adept at creating new and original content, such as text, images, audio, or code, based on the patterns and relationships learned from training data. This revolutionary technology can transform various sectors, from creative industries to scientific research and product development.

Revolutionizing Daily Tech: AI's Role in Our Everyday Lives

Dive into the fascinating world of how artificial intelligence is seamlessly woven into the fabric of our daily technology, transforming the mundane into the extraordinary. At Brivo, we've been at the forefront of integrating generative AI into cloud-based solutions, redefining what's possible in commercial real estate, multifamily residential, and large distributed enterprises. Join us as we explore the endless possibilities that AI brings to everyday technology, making our lives more secure, efficient, and connected.

Microsoft Copilot for Security - Use Cases for Data Governance Teams Working with Auditors and Consultants

This is the final installment of our Microsoft Copilot for Security blog series. Over the past eight weeks, our weekly blog helped various cyber security groups see possible use cases for Microsoft Copilot for Security. This final blog explores how AI and Microsoft Copilot for Security can assist external auditors and consultants in interacting with Microsoft Purview. Azure Policy and Microsoft Purview work together to ensure the proper governance and compliance of data assets.

Generative AI and Cyber Security

There has been a lot of talk about Artificial Intelligence (AI) in recent years. It is certainly a polarizing subject. While it raises hopes about the future of technology and what humanity is capable of, it also raises questions around human control and technological determinism. There are those who worry that Artificial Intelligence is going to ‘take people’s jobs’, or even take over the world, and that the world will end up like a dystopian ‘Terminator’-style film.

Empowering Customers & Partners: Unveiling the Transformative Impact of Brivo AI

Explore the future of security and smart technology with Brivo. Our content delves into innovative solutions that empower businesses and individuals to create safer, more connected environments. Don't forget to like, share, and subscribe to stay updated on the latest trends in access control and smart space management. Connect with us for a smarter, more secure tomorrow.

AI Voice Cloning and Bank Voice Authentication: A Recipe for Disaster?

New advancements in generative AI voice cloning come at a time when banks are looking for additional ways to authenticate their customers – and they’re choosing your voice. Banks adopted the principles of multi-factor authentication years ago. But continued cyber attacks exploiting SIM-swapping services have increased the risk of assuming that the credential owner actually possesses the mobile device. So, where do they go next to prove you’re you? Voiceprint.

Navigating AI and Cybersecurity: Insights from the World Economic Forum (WEF)

Cybersecurity has always been a complex field. Its adversarial nature means the margins between failure and success are much finer than in other sectors. As technology evolves, those margins get even finer, with attackers and defenders scrambling to exploit them and gain a competitive edge. This is especially true for AI.

EP 50 - Adversarial AI's Advance

In the 50th episode of the Trust Issues podcast, host David Puner interviews Justin Hutchens, an innovation principal at Trace3 and co-host of the Cyber Cognition podcast (along with CyberArk’s resident Technical Evangelist, White Hat Hacker and Transhuman Len Noe). They discuss the emergence and potential misuse of generative AI, especially natural language processing, for social engineering and adversarial hacking.

Speed vs Security: Striking the Right Balance in Software Development with AI

Software development teams face a constant dilemma: striking the right balance between speed and security. How is artificial intelligence (AI) impacting this dilemma? With the increasing use of AI in the development process, it's essential to understand the risks involved and how we can maintain a secure environment without compromising on speed. Let’s dive in.

Protecto - AI Regulations and Governance Monthly Update - March 2024

In a landmark development, the U.S. Department of Homeland Security (DHS) has unveiled its pioneering Artificial Intelligence Roadmap, marking a significant stride towards incorporating generative AI models into federal agencies' operations. Under the leadership of Secretary Alejandro N. Mayorkas and Chief Information Officer Eric Hysen, DHS aims to harness AI technologies to bolster national security while safeguarding individual privacy and civil liberties.

Understanding AI Package Hallucination: The latest dependency security threat

In this video, we explore AI package hallucination. This threat arises when AI code-generation tools hallucinate open-source packages or libraries that don't exist. We explore why this happens and show a demo of ChatGPT inventing multiple packages that don't exist. We also explain why this is a prominent threat and how malicious hackers could harness this new vulnerability for evil. It is the next evolution of typosquatting.
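The defensive takeaway generalizes beyond the video: treat any LLM-suggested dependency as untrusted until verified. A minimal sketch, assuming a hypothetical internal allowlist (a real pipeline would also check the package registry and its publication history):

```python
# Guard against hallucinated dependencies: before installing anything an
# LLM suggests, verify the name against a vetted allowlist.
APPROVED = {"requests", "numpy", "flask"}  # hypothetical internal allowlist

def vet_dependency(name: str) -> bool:
    """Return True only if the (normalized) package name has been vetted."""
    normalized = name.lower().replace("_", "-")
    return normalized in APPROVED

# The second name is the kind of plausible-sounding package a model invents.
for pkg in ["requests", "flask-security-utils-pro"]:
    status = "ok" if vet_dependency(pkg) else "REJECT: not on allowlist"
    print(f"{pkg}: {status}")
```

Default-deny is the point: a hallucinated name that an attacker later registers on the public index simply never reaches `pip install`.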

An investigation into code injection vulnerabilities caused by generative AI

Generative AI is an exciting technology that is now easily available through cloud APIs provided by companies such as Google and OpenAI. While it’s a powerful tool, the use of generative AI within code opens up additional security considerations that developers must take into account to ensure that their applications remain secure. In this article, we look at the potential security implications of large language models (LLMs), a text-producing form of generative AI.
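A common instance of the problem is passing raw model output straight to an interpreter. One hedged mitigation sketch, assuming the application asks the model for structured JSON and validates it before acting (the field names and allowed actions here are illustrative):

```python
import json

# Never eval() or exec() text that came from an LLM; treat it as untrusted
# input, the same as a web form. Require structured output and validate it.

def parse_llm_reply(reply: str) -> dict:
    data = json.loads(reply)  # raises on anything that isn't valid JSON
    if set(data) != {"action", "target"}:
        raise ValueError("unexpected fields in model output")
    if data["action"] not in {"open", "close"}:
        raise ValueError("disallowed action")
    return data

safe = parse_llm_reply('{"action": "open", "target": "ticket-42"}')
print(safe["action"])  # prints "open"
```

The validation layer turns "whatever the model said" into a small, enumerable set of operations, which is what closes the injection path.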

Casting a Cybersecurity Net to Secure Generative AI in Manufacturing

Generative AI has exploded in popularity across many industries. While this technology has many benefits, it also raises some unique cybersecurity concerns. Securing AI must be a top priority for organizations as they rush to implement these tools. The use of generative AI in manufacturing poses particular challenges. Over one-third of manufacturers plan to invest in this technology, making it the industry's fourth most common strategic business change.

How AI will impact cybersecurity: the beginning of fifth-gen SIEM

The power of artificial intelligence (AI) and machine learning (ML) is a double-edged sword — empowering cybercriminals and cybersecurity professionals alike. AI, particularly generative AI’s ability to automate tasks, extract information from vast amounts of data, and generate communications and media indistinguishable from the real thing, can all be used to enhance cyberattacks and campaigns.

The NIST AI Risk Management Framework: Building Trust in AI

The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a recent framework developed by the National Institute of Standards and Technology (NIST) to guide organizations across all sectors in their use of artificial intelligence (AI) systems. As AI is implemented in nearly every sector — from healthcare to finance to national defense — it also brings new risks and concerns with it.

Nightfall AI: The First AI-Native Enterprise DLP Platform

Legacy DLP solutions never worked. They're point solutions that generate an overwhelming number of false positive alerts, and block the business in the process. But no longer. Enter: Nightfall AI, the first AI-native enterprise DLP platform that protects sensitive data across SaaS, generative AI (GenAI), email, and endpoints, all from the convenience of a unified console.

Best LLM Security Tools of 2024: Safeguarding Your Large Language Models

As large language models (LLMs) continue to push the boundaries of natural language processing, their widespread adoption across various industries has highlighted the critical need for robust security measures. These powerful AI systems, while immensely beneficial, are not immune to potential risks and vulnerabilities. In 2024, the landscape of LLM security tools has evolved to address the unique challenges posed by these advanced models, ensuring their safe and responsible deployment.

Elastic Security | AI Assistant Demo

Elastic AI Assistant can provide real-time, personalized alert insights — empowering security teams to stay one step ahead in the ever-evolving threat landscape. With the power of large language models (LLMs), the AI Assistant can process multiple alerts simultaneously, offering an unprecedented level of insight and customization. You can interact with your data by asking complex questions and receiving context-aware responses tailored to your needs. Watch this demo from James Spiteri, Director of Product Management at Elastic, to see what's new in the Elastic AI Assistant in Elastic Security 8.12.

OWASP Top 10 for LLM Applications: A Quick Guide

Published in 2023, the OWASP Top 10 for LLM Applications is a monumental effort made possible by a large number of experts in the fields of AI, cybersecurity, cloud technology, and beyond. OWASP contributors came up with over 40 distinct threats and then voted and refined their list down to the ten most important vulnerabilities.

The Security Risks of Microsoft Bing AI Chat at this Time

AI has long been an intriguing topic for every tech-savvy person, and the concept of AI chatbots is not entirely new. In 2023, AI chatbots became all the world could talk about, especially after the release of ChatGPT by OpenAI. Still, there was a time when AI chatbots, specifically Bing’s AI chatbot Sydney, managed to wreak havoc over the internet and had to be forcefully shut down.

Unlocking Insights with AI: Introducing Data Explorer by Brivo

Welcome to the future of data analysis! 🌟 In this video, we're diving deep into Brivo's latest innovation - the Data Explorer, an AI-powered tool designed to revolutionize the way we approach data analysis. With the power of artificial intelligence, Data Explorer simplifies complex data sets, allowing you to uncover insights with minimal effort. 🧠💡

Tracing history: The generative AI revolution in SIEM

The cybersecurity domain mirrors the physical space, with the security operations center (SOC) acting as your digital police department. Cybersecurity analysts are like the police, working to deter cybercriminals from attempting attacks on their organization or stopping them in their tracks if they try it. When an attack occurs, incident responders, akin to digital detectives, piece together clues from many different sources to determine the order and details of events before building a remediation plan.

AI in Web Development: The Capability and Effectiveness of ChatGPT

Web development can be an exhilarating and fascinating field. Web developers build robust apps that support numerous users and fulfill a variety of functions by utilizing a range of databases, frameworks, and programming languages. But as thrilling as it might be, developing a fully working website takes time and technical know-how.

CrowdStrike, Intel and Dell: Clustering and Similarity Assessment for AI-driven Endpoint Security with Intel NPU Acceleration

CrowdStrike’s mission is to stop breaches. We continuously research and develop technologies to outpace new and sophisticated threats and stop adversaries from pursuing attacks. We also recognize that security is best when it’s a team sport. In today’s threat landscape, technology collaboration is essential to deploy novel methods of analysis and defense.

LangFriend, SceneScript, and More - Monthly AI News

Memory integration into Large Language Model (LLM) systems has emerged as a pivotal frontier in AI development, offering the potential to enhance user experiences through personalized interactions. Enter LangFriend, a groundbreaking journaling app that leverages long-term memory to craft tailored responses and elevate user engagement. Let's explore the innovative features of LangFriend, which is inspired by academic research and cutting-edge industry practices.

IT Leaders Can't Stop AI and Deepfake Scams as They Top the List of Most Frequent Attacks

New data shows that the attacks IT feels least equipped to stop are the ones they’re experiencing the most. According to Keeper Security’s latest report, The Future of Defense: IT Leaders Brace for Unprecedented Cyber Threats, the list of the most serious emerging technologies used in modern cyber attacks is led by AI-powered attacks and deepfake technology. By itself, this information wouldn’t be that damning.

Introducing Salt Security's New AI-Powered Knowledge Base Assistant: Pepper!

A vendor's Knowledge Base (KB) is often the first place practitioners go to get the product deployed or to troubleshoot issues. Even with advanced search tools, it has historically been challenging to find relevant content quickly, and navigating a KB can be frustrating. At Salt Security, not only do we want to make your job of securing APIs easier, we also want to make getting the guidance you need easier, friendlier, and more efficient.

Securing AI with Least Privilege

In the rapidly evolving AI landscape, the principle of least privilege is a crucial security and compliance consideration. Least privilege dictates that any entity—user or system—should have only the minimum level of access permissions necessary to perform its intended functions. This principle is especially vital when it comes to AI models, as it applies to both the training and inference phases.
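As an illustration of that principle, a default-deny permission check for AI workloads might look like the following sketch (the principals and scopes are hypothetical, not any particular vendor's model):

```python
# Least privilege for AI workloads: each principal gets only the scopes it
# needs for its phase (training vs. inference); everything else is denied.
GRANTS = {
    "inference-service": {"read:embeddings", "read:model-weights"},
    "training-job": {"read:dataset", "write:checkpoints"},
}

def is_allowed(principal: str, scope: str) -> bool:
    # Default-deny: unknown principals or unlisted scopes get nothing.
    return scope in GRANTS.get(principal, set())

print(is_allowed("inference-service", "read:model-weights"))  # prints True
print(is_allowed("inference-service", "write:checkpoints"))   # prints False
```

The key design choice is that the inference service cannot touch training data or write checkpoints at all; a compromised model endpoint then has nothing to exfiltrate beyond what it already serves.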

How Cato Uses Large Language Models to Improve Data Loss Prevention

Cato Networks has recently released a new data loss prevention (DLP) capability, enabling customers to detect and block documents being transferred over the network, based on sensitive categories, such as tax forms, financial transactions, patent filings, medical records, job applications, and more. Many modern DLP solutions rely heavily on pattern-based matching to detect sensitive information. However, they don’t enable full control over sensitive data loss.
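To see why pattern-based matching alone falls short, consider a minimal sketch of a classic regex-based DLP check (the pattern and function are illustrative, not Cato's implementation):

```python
import re

# Classic pattern-based DLP: a regex catches well-formed identifiers
# (here, the US SSN shape NNN-NN-NNNN) but knows nothing about context,
# which is the gap LLM-based document classification aims to close.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def flag_sensitive(text: str) -> list[str]:
    """Return every SSN-shaped string found in the text."""
    return SSN_PATTERN.findall(text)

print(flag_sensitive("Employee SSN: 123-45-6789, office ext: 555-1234"))
# prints ['123-45-6789']
```

A regex cannot tell a tax form from a lorem-ipsum test fixture containing the same digits, which is why category-level detection of whole documents is the harder and more useful problem.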

AI - The Good, Bad, and Scary

AI and machine learning (ML) optimize processes by recommending ways to improve productivity, reduce cycle times, and maximize efficiency. AI also optimizes human capital by performing mundane and repetitive tasks 24x7 without the need for rest, minimizing human error. There are numerous ways AI can benefit society. But as much as AI can propel human progress forward, without proper guidance it can work to our own detriment.

Firewalls for AI: The Essential Guide

As the adoption of AI models, particularly large language models (LLMs), continues to accelerate, enterprises are growing increasingly concerned about implementing proper security measures to protect these systems. Integrating LLMs into internet-connected applications exposes new attack surfaces that malicious actors could potentially exploit.

Trustwave Embarks on an Extended Partnership with Microsoft Copilot for Security

Trustwave today announced it will offer clients expert guidance on implementing and fully leveraging the just-released Microsoft Copilot for Security, a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes.