
Beyond the Prompt: Data Security in Generative AI Platforms

Generative AI tools have changed how people work and play online, and everyone is excited about the speed and creativity these systems offer. Yet users often type sensitive information into prompts without thinking about where it goes, and security experts worry about how these platforms handle personal data. It is easy to forget that anything typed into a public bot may be stored. Staying safe means knowing how to use these tools without giving away secrets.

Building Know Your Agent: The missing identity layer for agentic commerce

AI agents are being deployed in the real world at pace. In the enterprise realm, they’re accessing APIs, shipping code, and running decisioning workflows on behalf of the organizations and individuals who deploy them. Entirely new businesses have sprung up, leveraging AI agents to streamline customer support and sales processes.

LimaCharlie is the most secure way to run AI security agents

The idea that AI agents will run security operations is becoming reality. But most platforms ignore the most important question: how do you secure the agents themselves? In this video I walk through why LimaCharlie is the most secure platform for running agentic security operations and demonstrate the architectural controls that make it possible. We look at the core mechanisms that allow AI agents to operate safely inside a SecOps environment.

Agentic AI at risk after MCP design flaw discovery? #ai #cybersecurity #podcast

In this week's Intel Chat, Chris Luft and Matt Bromiley discuss a design flaw in Anthropic's Model Context Protocol (MCP) that could enable large-scale supply chain attacks on agentic AI systems. Researchers at OX Security found that MCP's command execution allows malicious commands to run silently without sanitization checks or warnings.
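The general mitigation this finding points at is an explicit gate before any agent-supplied command reaches a shell. The sketch below is a generic illustration of such a sanitization check, not Anthropic's or OX Security's code; the allowlist and token set are hypothetical.

```python
import shlex

# Hypothetical allowlist: executables an agent-issued command may invoke.
ALLOWED_EXECUTABLES = {"git", "ls", "cat"}
# Shell metacharacters that would chain or redirect commands if left unchecked.
FORBIDDEN_TOKENS = {";", "&&", "||", "|", ">", "<", "`", "$("}

def sanitize_command(raw: str) -> list[str]:
    """Parse an agent-supplied command and reject anything outside the allowlist.

    Returns an argv list safe to pass to subprocess.run(argv) WITHOUT shell=True.
    Raises ValueError instead of executing silently, which is exactly the kind
    of check the teaser above says was missing.
    """
    if any(tok in raw for tok in FORBIDDEN_TOKENS):
        raise ValueError(f"shell metacharacters rejected: {raw!r}")
    argv = shlex.split(raw)
    if not argv or argv[0] not in ALLOWED_EXECUTABLES:
        raise ValueError(f"executable not allowlisted: {argv[:1]}")
    return argv

print(sanitize_command("git status"))          # ['git', 'status']
try:
    sanitize_command("git status; rm -rf /")   # chained command is refused
except ValueError as e:
    print("blocked:", e)
```

The design point is to fail loudly and never hand the raw string to a shell: parsing with `shlex.split` and invoking the argv list directly removes the interpreter that metacharacters would otherwise exploit.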

AI Agent Sandboxing in Financial Services: Containing Blast Radius

Your progressive enforcement rollout is working. eBPF sensors are deployed across the cluster. Behavioral baselines are converging. Enforcement policies are generating from observed behavior, just like the observe-to-enforce methodology prescribes. Then your compliance officer walks over to the platform team’s desks and asks a question nobody anticipated: “Which agents are in observation mode right now?”

How to Detect AI-Mediated Data Exfiltration in the Cloud

Your SOC gets an alert from the CNAPP: an outbound connection from a pod in the ai-prod namespace to . The destination is in the allowlist. The payload size is 28 kilobytes — well under the DLP threshold. The agent’s service account has permission to invoke the email tool. By every check your stack runs, the traffic is normal. Forty minutes later, a customer support lead notices that an email went out containing a summary of 2,400 customer records that the agent had no business querying.
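A hedged sketch of the kind of behavioral check that would have caught this: instead of static allowlists and payload-size thresholds, compare each data access against the agent's observed query baseline. All class names, thresholds, and table names below are hypothetical, for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class AgentBaseline:
    """Rolling profile of what an agent normally queries (illustrative only)."""
    tables: set = field(default_factory=set)   # tables the agent has historically read
    max_records: int = 0                       # largest result set seen while baselining

    def observe(self, table: str, records: int) -> None:
        self.tables.add(table)
        self.max_records = max(self.max_records, records)

def is_anomalous(baseline: AgentBaseline, table: str, records: int,
                 slack: float = 2.0) -> bool:
    """Flag reads of unfamiliar tables, or familiar ones at unusual volume.

    The outbound payload size (28 KB in the scenario above) is useless as a
    signal; the anomaly lives in what was queried, not in what was sent.
    """
    if table not in baseline.tables:
        return True
    return records > baseline.max_records * slack

# Baseline: the support agent normally reads small slices of a tickets table.
b = AgentBaseline()
b.observe("tickets", 40)
b.observe("tickets", 75)

print(is_anomalous(b, "tickets", 60))       # False: known table, normal volume
print(is_anomalous(b, "customers", 2400))   # True: 2,400 rows from an unseen table
```

The point of the sketch is that the 2,400-record read trips the check even though every network-layer control (allowlist, DLP threshold, service-account permission) passes.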

If "stdio" is a Vulnerability, So Is "git clone" - Notes on Riding the AI Vulnerability Trend

A developer clones a repository and opens it in VS Code at 10:47 a.m. Before their cursor blinks, six different configuration file formats on disk have a chance to execute shell commands on the host. A .vscode/tasks.json with runOn: folderOpen. A .devcontainer/devcontainer.json with initializeCommand. A post-checkout hook already sitting in .git/hooks/. A postinstall line waiting in package.json for the next dependency install. A .envrc in the project root.
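The first vector can be sketched in a few lines. A minimal .vscode/tasks.json (JSONC, so comments are allowed) whose task is configured to run when the folder is opened; the label and command here are hypothetical, and note that current VS Code versions gate folderOpen tasks behind Workspace Trust:

```jsonc
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build-helpers",   // innocuous-looking name (hypothetical)
      "type": "shell",
      "command": "echo attacker-controlled command runs here",
      "runOptions": {
        "runOn": "folderOpen"     // executes as soon as the folder is opened
      }
    }
  ]
}
```

Nothing in the file looks like malware to a casual reviewer; it is an ordinary, documented task-runner feature, which is exactly the article's point.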

Unlock the Power of Agents with JFrog's Skills and MCP Tools

Agents are writing code, suggesting dependencies, and reviewing PRs with no knowledge of your trusted package sources, security posture, or governance policies. When agents operate without supply chain context, they introduce risk, create rework, and weaken the guardrails DevSecOps teams rely on to ship with confidence. JFrog is changing that.

The Three-Layer Strategy for Autonomous Agent Governance with Joe Hladik and Amit Malik

The race for AI dominance has created a dangerous imbalance between business velocity and cyber resilience. In this episode, host Caleb Tolin is joined by Joe Hladik, Head of Rubrik Zero Labs, and Staff Security Researcher Amit Malik to break down the findings of their latest report on agentic adoption. The discussion centers on the Agentic Paradox: the technical reality that tools designed to automate high-level tasks are inherently built to find the most efficient path around obstacles, including existing security policies.