Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Latest News

LangFriend, SceneScript, and More - Monthly AI News

Memory integration into Large Language Model (LLM) systems has emerged as a pivotal frontier in AI development, offering the potential to enhance user experiences through personalized interactions. Enter LangFriend, a groundbreaking journaling app that leverages long-term memory to craft tailored responses and elevate user engagement. Let's explore the innovative features of LangFriend, which is inspired by academic research and cutting-edge industry practices.

IT Leaders Can't Stop AI and Deepfake Scams as They Top the List of Most Frequent Attacks

New data shows that the attacks IT feels most inadequate to stop are the ones they’re experiencing the most. According to Keeper Security’s latest report, The Future of Defense: IT Leaders Brace for Unprecedented Cyber Threats, the most serious emerging types of technologies being used in modern cyber attacks lead with AI-powered attacks and deepfake technology. By itself, this information wouldn’t be that damning.

Introducing Salt Security's New AI-Powered Knowledge Base Assistant: Pepper!

Going to a vendor's Knowledge Base (KB) is often the first place practitioners go to get the product deployed or troubleshoot issues. Even with advanced search tools, historically, KBs have been challenging to find relevant content quickly, and navigating a KB can be frustrating. At Salt Security, not only do we want to make your job of securing APIs easier, but we also want to make getting the guidance you need easier, friendlier and more efficient.

Securing AI with Least Privilege

In the rapidly evolving AI landscape, the principle of least privilege is a crucial security and compliance consideration. Least privilege dictates that any entity—user or system—should have only the minimum level of access permissions necessary to perform its intended functions. This principle is especially vital when it comes to AI models, as it applies to both the training and inference phases.

How Cato Uses Large Language Models to Improve Data Loss Prevention

Cato Networks has recently released a new data loss prevention (DLP) capability, enabling customers to detect and block documents being transferred over the network, based on sensitive categories, such as tax forms, financial transactions, patent filings, medical records, job applications, and more. Many modern DLP solutions rely heavily on pattern-based matching to detect sensitive information. However, they don’t enable full control over sensitive data loss.

Firewalls for AI: The Essential Guide

As the adoption of AI models, particularly large language models (LLMs), continues to accelerate, enterprises are growing increasingly concerned about implementing proper security measures to protect these systems. Integrating LLMs into internet-connected applications exposes new attack surfaces that malicious actors could potentially exploit.

Trustwave Embarks on an Extended Partnership with Microsoft Copilot for Security

Trustwave today announced it will offer clients expert guidance on implementing and fully leveraging the just-released Microsoft Copilot for Security, a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes.