Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

I Tried 5 Prompt Injection Attacks (Here's What Happened)

In this video, we explore the growing security risk of prompt injection in large language model (LLM) applications. As AI becomes embedded in more products, new vulnerabilities emerge, especially through natural language manipulation. We break down how LLMs work, the importance of system prompts, and demonstrate five real-world prompt injection techniques used to extract sensitive information or bypass safeguards. You’ll see live examples using different models and learn why newer models are more resilient, but still not immune.

CVE-2026-0968: The libssh Heap Read That Isn't as Scary as Scanners Say

A missing null check in libssh’s SFTP directory listing code lets a malicious server crash clients, but real-world exploitability is extremely constrained. CVE-2026-0968 is an out-of-bounds heap read in sftp_parse_longname(), triggered when an SFTP client processes a crafted SSH_FXP_NAME response with a malformed longname field. Red Hat, which serves as the CNA (CVE Numbering Authority) for this vulnerability, scored it 3.1 (Low), while Amazon Linux independently scored it 4.2 (Medium).

AI Workload Baseline and Drift Detection: Defining "Normal" Agent Behavior

Security teams deploying AI agents into Kubernetes know they need behavioral baselines. The concept is straightforward: define what “normal” looks like for each agent, then detect when behavior drifts in ways that suggest compromise. The problem is that AI agents are designed to change. A model update alters inference latency. A prompt revision shifts tool-calling sequences. A new MCP integration adds API destinations nobody flagged during the last security review.

How to Triage an AI Agent Execution Graph: A Three-Tier Decision Framework for Security Teams

A platform security engineer gets an alert at 2:14 a.m. One of the LangChain agents running in their production Kubernetes cluster has produced an execution graph with eleven nodes, seven tool calls, and an egress edge to a domain that is not in the agent’s approved integration list. The chain is fully rendered in their console. Every signal is there.

The CISO's AI Agent Production Approval Checklist: 7 Gates to Clear Before Go-Live

Your engineering lead is in your office Thursday morning. They want to push an AI agent to production next Tuesday. It’s a LangChain-based workflow agent, connected through MCP to three internal tools and one external API, with access to a customer database. The framework posters are on the wall. Your team has spent two quarters standing up runtime observability. And sitting in that chair, you still don’t know whether to say yes.

Auditing Agentic Behavior for FedRAMP Compliance | Teleport

AI agents are tireless, highly capable, eager to please, but difficult to manage. George Chamales (CriticalSec) and Josh Rector (Ace of Cloud) unpack the identity and access challenges posed by agentic AI. How do you verify it was the right agent, doing the right action, approved by the right person? How do we bound, constrain, govern agentic behavior? Ultimately, the same frameworks built for human identity and access should be applied to agents.

From Plaintext, to BLESS, to Identity: The Evolution of Secure Remote Access

My first introduction to UNIX remote access was via telnet and rsh protocols in college, which was the standard method at the time. But I soon started reading articles about how easy it was for someone to sniff the network and capture passwords since they were being transmitted in plaintext. On the shared network segments common to university campuses and early enterprise environments, the tools to intercept traffic were freely available, well-documented, and required very little skill to use.

CI/CD security: How to secure your GitHub ecosystem

In Part 1 of this series, we discussed the CI/CD security boundary, mapped out potential attack vectors with a CI/CD threat matrix, and introduced a simple threat model focused on ideating detection workflows. In this post, we’ll apply these principles to a real-world source code management (SCM) tool example that every developer is familiar with: GitHub. In addition to threat modeling, we’ll also be taking a closer look at historical attacks on GitHub and GitHub Actions ecosystems.