
Latest News

Enabling Secure AI Innovations by Citizen Developers

Technology can change in the blink of an eye, and nowhere is this more evident than in the rise of “citizen developers.” Often without formal technical training, these individuals leverage user-friendly platforms to create, innovate, and deploy AI-driven solutions. But the intuitive interfaces, templates, and code snippets that make this possible bring challenges of their own, and security is the one most easily hidden beneath the simplicity of drag-and-drop design.

Navigating Data Privacy for GenAI in Customer Support

As the adoption of generative AI (GenAI) accelerates across enterprises, one of the most promising applications emerges in customer support. GenAI enables automated responses, allowing businesses to engage in natural conversations with customers and provide real-time chat support. However, this convenience comes with inherent risks, particularly concerning data privacy.
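To make the risk concrete, one common safeguard is to redact personally identifiable information before a customer message ever reaches the model. The sketch below is purely illustrative, assuming hypothetical regex patterns and a redact helper of our own naming; a production deployment would rely on a dedicated PII or DLP detector rather than ad-hoc expressions.

```python
import re

# Illustrative only: placeholder patterns for a few common PII types.
# A real system would use a dedicated PII/DLP detector, not ad-hoc regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text leaves the trust boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

customer_message = "My card 4111 1111 1111 1111 was charged twice, email me at jane@example.com"
print(redact(customer_message))
# -> "My card [CARD] was charged twice, email me at [EMAIL]"
```

Redacting at this boundary keeps the raw values out of prompts, logs, and any downstream fine-tuning data.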

Nightfall AI releases GenAI-powered Sensitive Data Protection for the enterprise

The modern enterprise relies on hundreds of SaaS apps, email services, generative AI (GenAI) tools, custom apps, and LLMs, which often contain sensitive data. For too long, security teams have been forced to patch together point solutions for coverage across these channels, increasing their workloads and creating opportunities for sensitive data to slip through the cracks. This is precisely where Nightfall’s single-pane-of-glass solution, Nightfall Sensitive Data Protection, comes into play.

Mitigating a token-length side-channel attack in our AI products

Since the discovery of CRIME, BREACH, TIME, LUCKY-13, and similar attacks, length-based side channels have been considered practical. Even though packets were encrypted, attackers were able to infer information about the underlying plaintext by analyzing metadata such as packet length or timing. Cloudflare was recently contacted by a group of researchers at Ben Gurion University who wrote a paper titled “What Was Your Prompt? A Remote Keylogging Attack on AI Assistants.”
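The core observation is simple: when a model streams its reply token by token, the size of each encrypted chunk closely tracks the length of the token inside it, and those lengths can be enough to reconstruct likely plaintexts. The sketch below illustrates the leak and one common class of mitigation, padding each streamed chunk to a fixed size; the function name, block size, and token list are our own illustrative choices, not Cloudflare's actual implementation.

```python
# Illustrative sketch: how per-chunk sizes in a streamed, encrypted response can
# leak token lengths, and how padding each chunk to a fixed size hides them.
# All names and sizes here are hypothetical.

BLOCK = 32  # pad every streamed chunk up to this many bytes

def observed_sizes(tokens, pad=False):
    """Chunk sizes an on-path observer sees (ciphertext length ~= plaintext length)."""
    sizes = []
    for tok in tokens:
        n = len(tok.encode("utf-8"))
        if pad:
            n = ((n // BLOCK) + 1) * BLOCK  # round up, so most tokens look identical
        sizes.append(n)
    return sizes

tokens = ["The", " patient", " was", " diagnosed", " with", " diabetes", "."]

print(observed_sizes(tokens))            # [3, 8, 4, 10, 5, 9, 1]  <- token lengths leak
print(observed_sizes(tokens, pad=True))  # [32, 32, 32, 32, 32, 32, 32] <- uniform after padding
```

Padding trades a modest amount of bandwidth for removing the per-token length signal an observer could otherwise record and analyze.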

Beyond the Hype: How Torq's AI-Driven Innovations Are Transforming Security Automation

It has been over a year and a half since the latest generative AI revolution descended upon the world. Every IT market has seen a wave of new AI products, as well as AI-driven capabilities in existing products, introduced at a breakneck pace.

Security Flaws within ChatGPT Ecosystem Allowed Access to Accounts On Third-Party Websites and Sensitive Data

Salt Labs researchers identified generative AI ecosystems as an interesting new attack vector. Vulnerabilities found during this research into the ChatGPT ecosystem could have granted attackers access to user accounts, including their GitHub repositories, in some cases through 0-click attacks.

AI-Driven Voice Cloning Tech Used in Vishing Campaigns

Scammers are using AI technology to assist in voice phishing (vishing) campaigns, the Better Business Bureau (BBB) warns. Generative AI tools can now be used to create convincing imitations of people’s voices based on very small audio samples. “At work, you get a voicemail from your boss,” the BBB says. “They instruct you to wire thousands of dollars to a vendor for a rush project. The request is out of the blue. But it’s the boss’s orders, so you make the transfer.”

Five Principles for the Responsible Use, Adoption and Development of AI

We have been fantasising about artificial intelligence for a long time. This obsession has materialised in cultural masterpieces, in movies and books such as 2001: A Space Odyssey, Metropolis, Blade Runner, The Matrix, I, Robot, Westworld, and more. Most raise deep philosophical questions about human nature, but they also explore the potential behaviours and ethics of artificial intelligence, usually through a rather pessimistic lens.

Chatbot security risks continue to proliferate

While the rise of ChatGPT and other AI chatbots has been hailed as a business game-changer, it is increasingly being seen as a critical security issue. Previously, we outlined the challenges created by ChatGPT and other forms of AI. In this blog post, we look at the growing threat from AI-associated cyber-attacks and discuss new guidance from the National Institute of Standards and Technology (NIST).