Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Latest News

AI Regulations and Governance Monthly AI Update

In an era of unprecedented advancements in AI, the National Institute of Standards and Technology (NIST) has released its "strategic vision for AI," focusing on three primary goals: advancing the science of AI safety, demonstrating and disseminating AI safety practices, and supporting institutions and communities in AI safety coordination.

Adversarial Robustness in LLMs: Defending Against Malicious Inputs

Large Language Models (LLMs) are advanced artificial intelligence systems that understand and generate human language. These models, such as GPT-4, are built on deep learning architectures and trained on vast datasets, enabling them to perform various tasks, including text completion, translation, summarization, and more. Their ability to generate coherent and contextually relevant text has made them invaluable in the healthcare, finance, customer service, and entertainment industries.

Data Anonymization Techniques for Secure LLM Utilization

Data anonymization is transforming data to prevent the identification of individuals while conserving the data's utility. This technique is crucial for protecting sensitive information, securing compliance with privacy regulations, and upholding user trust. In the context of LLMs, anonymization is essential to protect the vast amounts of personal data these models often process, ensuring they can be utilized without compromising individual privacy.

BrowserGPT Review: The Ultimate ChatGPT Chrome Extension for Enhanced Web Productivity

In the constantly evolving digital landscape, BrowserGPT emerges as a beacon of innovation for enhancing productivity and efficiency online. As a comprehensive ChatGPT Chrome extension, BrowserGPT offers a unique set of features that seamlessly integrate into users' web browsing experiences. This review delves into the capabilities and functionalities of BrowserGPT, evaluating its potential to redefine how we interact with content on the web.

Kroll insights hub highlights key AI security risks

From chatbots like ChatGPT to the large language models (LLMs) that power them, managing and mitigating potential AI vulnerabilities is an increasingly important aspect of effective cybersecurity. Kroll’s new AI insights hub explores some of the key AI security challenges informed by our expertise in helping businesses of all sizes, in a wide range of sectors. Some of the topics covered on the Kroll AI insights hub are outlined below.

When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI

In the rapidly evolving fields of large language models (LLMs) and machine learning, new frameworks and applications emerge daily, pushing the boundaries of these technologies. While exploring libraries and frameworks that leverage LLMs for user-facing applications, we came across the Vanna.AI library – which offers a text-to-SQL interface for users – where we discovered CVE-2024-5565, a remote code execution vulnerability via prompt injection techniques.

BlueVoyant Awarded Microsoft Worldwide Security Partner of the Year, Recognizing Leading-Edge Cyber Defense

We are over the moon to share that BlueVoyant has been awarded the Microsoft Worldwide Security Partner of the Year, demonstrating our leading-edge cyber defense capabilities and our strong partnership with Microsoft. We have also been recognized as the Microsoft United States Security Partner of the Year for the third time, and the Microsoft Canada Security Partner of the Year for the first time.

The Importance of AI Penetration Testing

Penetration Testing, often known as "pen testing," plays a pivotal role in assessing the security posture of any digital environment. It's a simulated cyber attack where security teams utilise a series of attack techniques to identify and exploit vulnerabilities within systems, applications, and an organisation’s infrastructure. This form of testing is crucial because it evaluates the effectiveness of the organisation's defensive mechanisms against unauthorized access and malicious actors.

Breaking down BEC: Why Business Email Compromise is More Popular Than Ever

Cybersecurity moves fast, and the latest threats to reach organizations worldwide are being built on the back of artificial intelligence (AI) models that spit out accurate code, realistic messages, and lifelike audio and video designed to fool people. But as headline-grabbing as AI-based attacks appear to be, they aren’t driving the most breaches globally. That would be BEC attacks, in which attackers leverage stolen access to a business email account to create a scam that results in financial gain.