
AI Threat Detection for Financial Services: Detecting AI-Driven Fraud and Data Exfiltration

A Tier 1 bank already spends heavily on detection. On one side sits the financial surveillance stack: fraud scoring platforms processing thirty thousand transactions an hour, AML monitoring watching money-movement patterns, DLP engines scanning data in transit, payment anomaly detection tuned by a decade of production signal.

How Claude Helped Build a Proxmox Environment (and What I Learned Along the Way)

As a solutions architect, building out customer demo environments is part of the job. I regularly spin up lab scenarios to support evaluations and proof-of-concept work — and if you've done this before, you know it can eat up days of your life. So when I recently decided to refresh my homelab and migrate to Proxmox, I saw it as the perfect opportunity to put AI-assisted infrastructure automation to the test. The goal?

How to Identify and Reduce Excessive Permissions in AI Workloads

Your CIEM report came back clean this morning. Every AI agent in the cluster is exercising its granted permissions — no idle roles, no service accounts with broad scope and a handful of API calls behind them, nothing that looks obviously over-provisioned. The dashboard is green, and by the diagnostic your tool was built on, it should be.
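The usage-based diagnostic behind that green dashboard reduces to a set difference between granted and exercised permissions. A minimal sketch (the permission names and the `unused_permissions` helper are hypothetical, not any CIEM vendor's API) shows why a broad grant that is fully exercised passes cleanly:

```python
# Sketch of a usage-based CIEM check: flag identities whose granted
# permissions go unexercised. An agent that uses everything it was
# granted passes -- even if the grant itself is far too broad.

def unused_permissions(granted: set[str], used: set[str]) -> set[str]:
    """Permissions that were granted but never exercised."""
    return granted - used

agent = {
    "granted": {"s3:GetObject", "s3:PutObject", "dynamodb:Query"},
    "used":    {"s3:GetObject", "s3:PutObject", "dynamodb:Query"},
}

idle = unused_permissions(agent["granted"], agent["used"])
status = "green" if not idle else f"flag {sorted(idle)}"
print("dashboard:", status)
```

The gap the article points at is that this check measures utilization, not necessity: it cannot tell whether the agent *needed* `s3:PutObject`, only whether it called it.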

The Butlerian Jihad: Compromised Bitwarden CLI Deploys npm Worm, Poisons AI Assistants, and Dumps GitHub Secrets

Part 1 covered CanisterWorm, the self-spreading npm worm. Part 2 covered the malicious LiteLLM package. Part 3 covered the Telnyx WAV steganography attack. Part 4 covered the xinference AI inference attack. This post covers a compromised @bitwarden/cli package that combines a self-propagating npm worm, a GitHub Actions secrets dumper, and a novel AI assistant poisoning technique.

AI Agent Security Framework on GKE: Implementation Guide

Your platform team spent a week configuring the Agent Sandbox CRD on a gVisor-enabled node pool — the architecture Google positions as the recommended pattern for AI agent workloads on GKE. Workload Identity Federation with KSA principals is bound to every agent pod. Container Threat Detection is licensed and active in Security Command Center Premium. And the runtime behavioral sensor you budgeted for won’t install.

How Healthcare Platform Teams Should Secure AI Agents on Kubernetes

The surgeon is thirty-two minutes into a procedure. The ambient scribe pod listening to the operating room is mid-encounter — transcribing, retrieving prior chart context, drafting the operative note for post-op sign-off. At the same moment, your SOC gets an alert: anomalous tool invocation from that pod, elevated egress volume, behavioral deviation from the agent’s baseline.

Detecting Threats in Multi-Agent Orchestration Systems: LangChain, CrewAI, and AutoGPT

It’s Tuesday morning at a mid-size fintech. A customer-support workflow runs on CrewAI in production: a Triage agent reads tickets, a Records agent pulls customer history, a Remediation agent drafts and sends the reply. A user submits a ticket with a pasted error log containing an indirect prompt injection. Triage summarizes and delegates. Records, interpreting instructions embedded in the summary, pulls 2,400 customer records instead of one.
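One detection signal in that scenario is purely volumetric: the Records agent's per-ticket record count jumping from single digits to 2,400. A minimal sketch (the `is_anomalous` helper and the threshold are illustrative assumptions, not a CrewAI API) of a baseline-deviation check on a data-access tool call:

```python
# Hypothetical guardrail: flag a record-retrieval call whose volume
# far exceeds the agent's historical per-task baseline.
from statistics import mean, pstdev

def is_anomalous(count: int, baseline: list[int], k: float = 3.0) -> bool:
    """True when count exceeds the baseline mean by more than k std devs."""
    mu, sigma = mean(baseline), pstdev(baseline)
    # Floor sigma at 1.0 so a near-constant baseline doesn't alert on noise.
    return count > mu + k * max(sigma, 1.0)

baseline = [1, 1, 2, 1, 3, 1, 1, 2]  # records pulled per ticket, historically
is_anomalous(1, baseline)            # a normal single-customer lookup
is_anomalous(2400, baseline)         # the injected bulk pull
```

A check like this sits beside, not instead of, injection-aware input filtering: it catches the blast radius after the Triage-to-Records delegation has already been poisoned.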

GitGuardian Can Now Monitor Your Gerrit Repositories To Help You Fight Secrets Sprawl

In this video, Romain Jouhannet, Product Manager at GitGuardian, talks with Dwayne McDaniel, Developer Advocate at GitGuardian, about the platform's new native support for Gerrit as a VCS source. Gerrit is widely used for enterprise code review workflows, often hosting sensitive internal repositories. You can now connect your Gerrit instance to GitGuardian to detect secrets exposed across your repositories and commit histories, with the same experience as our other VCS integrations.

A Poisoned Xinference Package Targets AI Inference Servers

Part 1 covered CanisterWorm. Part 2 covered the malicious LiteLLM package. Part 3 covered the Telnyx WAV steganography attack. This post covers the latest wave: three malicious versions of xinference on PyPI, carrying the same credential-stealing playbook and a plot twist. On April 22, 2026, Mend.io’s threat detection identified malicious versions of xinference on PyPI: 2.6.0, 2.6.1, and 2.6.2.

Implementing AI Agent Security on Azure AKS: A Practical Guide

Your platform team deployed eBPF-based runtime sensors on AKS last week. Defender for Containers is enabled. Azure Policy is enforcing pod security standards across your AI workload namespaces. And your Observe pillar is still blind — because nobody enabled the Diagnostic Setting that routes kube-audit logs to the Log Analytics workspace where your tooling can actually consume them.
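The missing piece in that setup is a Diagnostic Setting on the AKS resource itself. A hedged sketch with the Azure CLI (resource IDs are placeholders; verify the log category names available on your cluster with `az monitor diagnostic-settings categories list` before relying on this):

```shell
# Route AKS control-plane kube-audit logs to a Log Analytics workspace
# so downstream tooling can consume them. Both IDs are placeholders.
az monitor diagnostic-settings create \
  --name aks-audit-to-law \
  --resource "<aks-cluster-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category": "kube-audit", "enabled": true}]'
```

Until a setting like this exists, the eBPF sensors and Defender for Containers see runtime behavior, but nothing observes the API-server audit trail.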