
Responsible AI Licenses (RAIL): Here's What You Need to Know

Responsible AI Licenses (RAIL) are a class of licenses designed to prevent harmful or unethical uses of artificial intelligence while still allowing models to be shared freely and openly among those who intend to use and improve them for authorized purposes. Anyone can create their own version of a RAIL for their model and, in doing so, impose more or fewer restrictions than those detailed in the template licenses.

Comparing OPA/Rego to AWS Cedar and Google Zanzibar

Rego, the policy language of the Open Policy Agent (OPA), is known for its flexibility and power in policy enforcement across various systems. Its declarative syntax and data-centric approach make it versatile for application authorization, infrastructure as code (IaC) authorization, and network policies. To fully appreciate OPA/Rego’s capabilities, it’s helpful to compare it with other policy languages and frameworks like AWS’s Cedar and Google’s Zanzibar.
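
To make Rego's data-centric approach concrete, here is a minimal sketch of an application asking a locally running OPA server for a decision over its REST Data API. The policy package path (authz/allow) and the input fields are assumptions for illustration; they depend on the Rego policy you actually have loaded.

```python
import requests

# OPA's Data API: POST /v1/data/<package path> with a JSON "input" document.
# The "authz/allow" path and the input shape below are illustrative assumptions.
OPA_URL = "http://localhost:8181/v1/data/authz/allow"

def is_allowed(user: str, action: str, resource: str) -> bool:
    """Ask OPA to evaluate the policy decision for this request."""
    payload = {"input": {"user": user, "action": action, "resource": resource}}
    resp = requests.post(OPA_URL, json=payload, timeout=5)
    resp.raise_for_status()
    # OPA replies with {"result": <value>}; "result" is absent if the rule is undefined.
    return resp.json().get("result", False) is True

if __name__ == "__main__":
    print(is_allowed("alice", "read", "report-42"))
```

The application ships structured input to the policy engine and gets back a decision, keeping authorization logic out of application code; Cedar and Zanzibar make different trade-offs around that same separation.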

Safeguarding LLMs in Sensitive Domains: Security Challenges and Solutions

Large Language Models (LLMs) have become indispensable tools across various sectors, reshaping how we interact with data and driving innovation in sensitive domains. Their profound impact extends to areas such as healthcare, finance, and law, where the handling of sensitive information demands heightened security measures.

Cloud Unfiltered with Andre Zayarni - Exploring AI and Vector Databases - Episode 13

Join your host Michael Chenetz as he interviews André Zayarni, the CTO of Qdrant, a leader in AI innovation known for its cutting-edge vector database technology. This conversation is essential listening for anyone interested in the integration of advanced search technologies and AI in modern applications.
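
For readers new to vector databases, here is a minimal sketch using Qdrant's Python client in its in-memory mode. The collection name, vector size, and payloads are illustrative assumptions; a real application would store embeddings produced by a model rather than hand-typed vectors.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# In-memory Qdrant instance: no server required for experimentation.
client = QdrantClient(":memory:")

# The collection name and 4-dimensional vectors are illustrative assumptions.
client.create_collection(
    collection_name="episodes",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert(
    collection_name="episodes",
    points=[
        PointStruct(id=1, vector=[0.9, 0.1, 0.1, 0.0], payload={"topic": "vector databases"}),
        PointStruct(id=2, vector=[0.1, 0.9, 0.0, 0.1], payload={"topic": "kubernetes"}),
    ],
)

# Nearest-neighbor search: return the stored point most similar to the query.
hits = client.search(collection_name="episodes", query_vector=[0.85, 0.15, 0.1, 0.05], limit=1)
print(hits[0].payload)  # {'topic': 'vector databases'}
```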

AI's Role in Securing AEC Data: Paving the Path Forward

In the oft-obscure world of Architecture, Engineering, and Construction (AEC), the structures we see reaching for the skyline are not just feats of design and engineering but archives of data, each rivet and beam a data point in a colossal network of information. Yet with these digital monoliths comes an invisible vulnerability: data control, a challenge that is upending the AEC industry.

Nightfall's Firewall for AI

From customer service chatbots to enterprise search tools, it’s essential to protect your sensitive data while building or using AI. Enter: Nightfall’s Firewall for AI, which connects seamlessly via APIs and SDKs to detect sensitive data exposure in your AI apps and data pipelines. With Nightfall’s Firewall for AI, you can intercept prompts containing sensitive data before they’re sent to third-party LLMs or included in your training data.
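
The interception pattern described here can be sketched generically. The scan_prompt function below is a hypothetical stand-in for a real detection API such as Nightfall's, not its actual SDK; a production firewall would call the vendor's detectors rather than a toy regex scanner.

```python
import re

# Hypothetical toy scanner standing in for a real sensitive-data detection
# API; the patterns below are illustrative, not production-grade detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data types found in the prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

def guarded_completion(prompt: str, call_llm) -> str:
    """Intercept the prompt before it reaches a third-party LLM."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked: detected {', '.join(findings)}")
    return call_llm(prompt)

if __name__ == "__main__":
    fake_llm = lambda p: f"echo: {p}"
    print(guarded_completion("What is a vector database?", fake_llm))
    # guarded_completion("My SSN is 123-45-6789", fake_llm)  # raises ValueError
```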

Malicious Use of Generative AI Large Language Models Now Comes in Multiple Flavors

Analysis of malicious large language model (LLM) offerings on the dark web uncovers wide variation in service quality, methodology, and value, with some being downright scams. Use of this technology has grown to the point that the cybercrime economy has expanded to include GenAI-based services such as FraudGPT and PoisonGPT, with many others joining their ranks.

What is Defensive AI and Why is it Essential in Bot Protection?

The term Artificial Intelligence (AI) has been thrown around loosely as it has risen to the top of the tech agenda over the past couple of years. Security professionals regard AI as both a risk to businesses and an opportunity. But could it also be a way to better defend your network against attacks? For many years, AI and machine learning have gone hand in hand, with AI used to inform defensive decisions and reduce the human element in more basic functions.
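
One way this defensive use of machine learning shows up in bot protection is anomaly detection over traffic features. The sketch below uses scikit-learn's IsolationForest on synthetic session features (requests per minute, distinct paths, error rate); the features and data are illustrative assumptions, not a production detector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic human-like sessions: modest request rates, few errors.
# Columns: requests per minute, distinct paths visited, error rate.
humans = np.column_stack([
    rng.normal(20, 5, 500),
    rng.normal(8, 2, 500),
    rng.normal(0.02, 0.01, 500),
])

# Fit an unsupervised outlier detector on the "normal" traffic.
model = IsolationForest(contamination=0.01, random_state=0).fit(humans)

# A scraper-like session: very high request rate, many paths, many errors.
suspect = np.array([[400.0, 120.0, 0.3]])
print(model.predict(suspect))     # [-1] = anomalous, likely a bot
print(model.predict(humans[:1]))  # [1]  = inlier, likely human
```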