Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How to Apply NIST 800-53 to AI Systems

Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI. Over the last 15 years, he has authored standards, guidance and best practices with ISO, NIST, and other governing bodies. Smith strives to create actionable resources for organizations seeking to minimize technological risk and increase value to customers.

Methods for Designing AI Identity | Teleport x The Cyber Hut

Three methods for issuing identity to AI agents — and why static credentials will always eventually leak no matter how well you vault them. Ev Kontsevoy breaks down standard credentials, durable identity, and digital twins, and explains why the issuer of identity needs to be the same across your entire environment.

The Need for Infrastructure Identity | Teleport x The Cyber Hut

Most organizations have identity over here and infrastructure over there — and they don't talk. By default, infrastructure has no identity. It's naked. Ev Kontsevoy explains why bringing identity into your infrastructure stack is a prerequisite for safe AI adoption — and what a trusted state actually looks like.

Video On Demand - Configuration Drift and the Risk of Misconfiguration

Misconfigurations can undermine security even on fully patched systems. In this webinar, CalCom’s Co-Founder and Director of Business Development Roy Ludmir explains what configuration vulnerabilities are, how configuration drift happens, and why it matters for both cyber risk and compliance. Questions? Want to talk about server hardening for your organization? Contact us at info@calcomsoftware.com.

Why Legacy Security Tools Fail to Protect Cloud AI Workloads

Your CNAPP flags a misconfigured service account. Your CSPM warns about an overly permissive IAM role. Your container scanner reports vulnerabilities in a model-serving image. But none of these tools can tell you that an AI agent just called an internal admin API it has never touched before — or that a prompt injection caused your LLM to leak customer data through a RAG connector.

AI Agent Escape Detection: How to Catch Agents Breaking Their Boundaries

Your SOC gets three alerts in quick succession: an unusual outbound connection from a container, a file read on a Kubernetes service account token, and a process spawn that doesn’t match the workload’s baseline. Three different tools, three separate dashboards, three tickets.

Signature Verification Bypass in Authlib (CVE-2026-28802): What Cloud Security Teams Need to Know

OAuth and OpenID Connect are the backbone of modern cloud-native identity and access management. From SaaS platforms and internal APIs to Kubernetes microservices, these protocols are responsible for verifying who is allowed to access what. When a vulnerability appears in a widely used authentication library, the impact can cascade across entire application ecosystems.

Introducing System Prompt Hardening: production-ready protection for system prompts

Today, we’re launching System Prompt Hardening, Mend.io’s new capability that defends the hidden instructions that control how your AI systems behave. Unlike user-facing prompts, system prompts live behind the scenes, and when attackers manipulate them, the result can be data leaks, policy bypasses, or unsafe model behavior. System prompt hardening stops those attacks at the source and gives security, engineering, and risk teams a practical, auditable way to secure AI in production.