Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Technology

Adversarial Robustness in LLMs: Defending Against Malicious Inputs

Large Language Models (LLMs) are advanced artificial intelligence systems that understand and generate human language. These models, such as GPT-4, are built on deep learning architectures and trained on vast datasets, enabling them to perform various tasks, including text completion, translation, summarization, and more. Their ability to generate coherent and contextually relevant text has made them invaluable in the healthcare, finance, customer service, and entertainment industries.

Introducing Postman Collection Support for API Security Testing

In today's digital landscape, Application Programming Interfaces (APIs) play an important role in driving innovation. They allow teams to integrate new applications with existing systems, reuse code and deliver software more efficiently. But, APIs are also prime targets for hackers due to their public availability and the large amounts of web data they transmit. API vulnerabilities can lead to unauthorized access, data breaches, and various other forms of attacks.

Data Anonymization Techniques for Secure LLM Utilization

Data anonymization is transforming data to prevent the identification of individuals while conserving the data's utility. This technique is crucial for protecting sensitive information, securing compliance with privacy regulations, and upholding user trust. In the context of LLMs, anonymization is essential to protect the vast amounts of personal data these models often process, ensuring they can be utilized without compromising individual privacy.

BrowserGPT Review: The Ultimate ChatGPT Chrome Extension for Enhanced Web Productivity

In the constantly evolving digital landscape, BrowserGPT emerges as a beacon of innovation for enhancing productivity and efficiency online. As a comprehensive ChatGPT Chrome extension, BrowserGPT offers a unique set of features that seamlessly integrate into users' web browsing experiences. This review delves into the capabilities and functionalities of BrowserGPT, evaluating its potential to redefine how we interact with content on the web.

Kroll insights hub highlights key AI security risks

From chatbots like ChatGPT to the large language models (LLMs) that power them, managing and mitigating potential AI vulnerabilities is an increasingly important aspect of effective cybersecurity. Kroll’s new AI insights hub explores some of the key AI security challenges informed by our expertise in helping businesses of all sizes, in a wide range of sectors. Some of the topics covered on the Kroll AI insights hub are outlined below.

When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI

In the rapidly evolving fields of large language models (LLMs) and machine learning, new frameworks and applications emerge daily, pushing the boundaries of these technologies. While exploring libraries and frameworks that leverage LLMs for user-facing applications, we came across the Vanna.AI library – which offers a text-to-SQL interface for users – where we discovered CVE-2024-5565, a remote code execution vulnerability via prompt injection techniques.

Cloud Security Compliance: Ensuring Data Safety in the Cloud

Modern organizations know that protecting their data is absolutely critical. That’s where cloud security compliance comes in. Satisfying regulatory standards helps organizations protect against unauthorized access and data breaches, as well as other security incidents. Beyond protecting data, compliance also protects organizations from the legal implications and financial effects of attacks.

Rapidly deliver trustworthy GenAI assistants with Motific

This demo highlights how Motific simplifies the journey of requesting a GenAI application, going through the approval process, connecting it with the right information sources, and provisioning an application to meet business requirements. With Motific, you can gain flexibility without complexity for easy deployments of ready-to-use AI assistants and APIs.

How to augment DevSecOps with AI?

Join us for a roundtable on GenAI's dual role in cybersecurity. Experts from GitGuardian, Snyk, Docker, and Protiviti, with Redmonk, discuss threat mitigation versus internal tool adoption, securing coding assistants, leveraging LLMs in supply chain security, and more. Gain valuable insights on harnessing GenAI to enhance your DevSecOps practices.