CrowdStrike Expands ChatGPT Enterprise Integration with Enhanced Audit Logging and Activity Monitoring

As organizations scale ChatGPT Enterprise across departments, AI is becoming embedded in everyday business operations. Finance teams are building custom GPTs. Developers are leveraging Codex to act on codebases. Employees are invoking third-party tools within AI conversations to automate workflows. As adoption accelerates, security teams face a fundamental challenge: visibility into the agents deployed and running in their SaaS environments.

How bail bond scams are using AI to target families

Bail bond scams are getting smarter with AI. Here's how to spot them before they cost you thousands. A call saying someone you love has been arrested and needs money ASAP can feel so real that you act before you think. Learn how bail bond scams work and what to watch for to help protect yourself and your family from falling for the scheme. Getting a call about bail isn’t something most people prepare for, and that’s exactly what scammers count on.

Behavior Intelligence: The New Model for Securing the Agentic Enterprise

Behavior Intelligence is a security operations model that detects risk by analyzing behavior, automates investigation and response using AI, and measures whether security outcomes are improving over time. It focuses on how users, systems, and AI agents operate rather than relying only on predefined rules or known indicators of compromise. This shift matters because modern attacks rarely look malicious at first. They look normal.
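The core idea of scoring behavior rather than matching known-bad signatures can be illustrated with a minimal sketch: build a baseline of how often each actor performs each action, then flag actions that are rare or unseen for that actor. The event fields (`actor`, `action`) and the scoring formula here are illustrative assumptions, not any vendor's actual model.

```python
from collections import Counter

def build_baseline(events):
    """Count how often each (actor, action) pair occurs in historical events."""
    return Counter((e["actor"], e["action"]) for e in events)

def anomaly_score(baseline, actor, action):
    """Score in [0, 1]: actions rarely or never seen for this actor score higher."""
    total = sum(n for (a, _), n in baseline.items() if a == actor)
    if total == 0:
        return 1.0  # actor never observed before: maximally anomalous
    seen = baseline.get((actor, action), 0)
    return 1.0 - seen / total

history = [
    {"actor": "svc-report", "action": "read:config"},
    {"actor": "svc-report", "action": "read:config"},
    {"actor": "svc-report", "action": "write:report"},
]
baseline = build_baseline(history)
print(anomaly_score(baseline, "svc-report", "read:config"))    # frequent -> low score
print(anomaly_score(baseline, "svc-report", "export:secrets")) # never seen -> 1.0
```

A real system would weight scores by sensitivity of the action and decay the baseline over time, but the shape is the same: the "export:secrets" call is suspicious not because it matches a rule, but because this actor has never behaved that way.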

Runtime Observability for LangChain and AutoGPT on Kubernetes

A platform team at a mid-size SaaS company runs three LangChain agents and one AutoGPT-derived planner on EKS. LangSmith is wired in. OpenTelemetry traces flow into their observability stack. Falco runs on every node. The setup is what most security teams would consider thorough. A pip dependency in one of the agents’ tool packages ships a malicious update.

AI Inference Server Observability in Kubernetes: The Four Signals MLOps Tools Don't Capture

In August 2025, a vulnerability chain in NVIDIA Triton Inference Server was found that allowed an unauthenticated remote attacker to send a single crafted inference request, leak the name of an internal shared memory region, register that region for subsequent requests, gain read-write primitives into the Triton Python backend’s private memory, and achieve full remote code execution. The exploit chain ran entirely through Triton’s standard inference API. No anomalous traffic volume.
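One behavioral signal in a chain like this is a client registering a shared-memory region it never created itself, since the leaked region name had to come from somewhere else. A minimal sketch of that check over a simulated request log follows; the log fields (`client`, `op`, `region`) are hypothetical, not Triton's actual log schema.

```python
def flag_unexpected_registrations(requests):
    """Flag requests that register a shared-memory region the client never created.

    Each request is a dict like {"client": ..., "op": "create"|"register"|"infer",
    "region": ...}. Field names are illustrative only.
    """
    created = set()  # (client, region) pairs the client created itself
    alerts = []
    for r in requests:
        key = (r["client"], r.get("region"))
        if r["op"] == "create":
            created.add(key)
        elif r["op"] == "register" and key not in created:
            alerts.append(r)  # region name obtained elsewhere, e.g. leaked
    return alerts

log = [
    {"client": "10.0.0.5", "op": "create",   "region": "client_shm_0"},
    {"client": "10.0.0.5", "op": "register", "region": "client_shm_0"},
    {"client": "10.0.0.9", "op": "register", "region": "internal_backend_shm"},
]
print(flag_unexpected_registrations(log))  # only the 10.0.0.9 request is flagged
```

The point mirrors the article's: no single request in the chain is malformed or high-volume, so detection has to correlate behavior across requests rather than inspect each one in isolation.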

Runtime Observability for MCP Servers: A Security Guide

Your security team sees an MCP tool server throw an error. Your APM dashboard shows a latency spike. Your logs capture the JSON-RPC request with its method name and parameters. But none of that tells you whether the tool just read a harmless config file or dumped credentials to an external IP. Traditional observability tools—the APM platforms, the OpenTelemetry traces, the centralized logging pipelines—track performance across your Model Context Protocol deployments.
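The visibility gap is easy to see in the wire format itself. MCP tool invocations are JSON-RPC 2.0 requests using the `tools/call` method; the sketch below builds two structurally identical requests (tool name and arguments here are hypothetical). Whatever the tool does with the data afterward, such as forwarding file contents to an external IP, never appears in this protocol-level log.

```python
import json

def mcp_tool_call(request_id, tool, arguments):
    """Build an MCP tools/call request (JSON-RPC 2.0, per the MCP spec)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Two requests with identical structure; only their runtime behavior differs.
print(mcp_tool_call(1, "read_file", {"path": "/app/config.yaml"}))
print(mcp_tool_call(2, "read_file", {"path": "/home/user/.aws/credentials"}))
```

Capturing these payloads tells you which tool ran and with what parameters, but not what the process did with the result, which is exactly the gap runtime observability is meant to close.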

Accelerating AI Discovery & Governance with the Falcon Platform

As AI adoption accelerates, so does shadow AI. Without a complete inventory of AI tools, agents, and activity, organizations are exposed to unapproved usage and data risk. In this video, you will see how the Falcon platform helps teams: Discover AI tools, models, and services in seconds Identify unapproved and risky usage See where AI is running and what it can access across endpoints Take action and enforce governance at scale.

The Research Behind Detecting and Attributing LLM-Generated Passwords - Gaëtan Ferry

GitGuardian Senior Cybersecurity Researcher Gaëtan Ferry’s latest research shows that AI-generated passwords are leaving fingerprints in the wild. In this interview, he explains how he used Markov chains, a century-old statistical model, to detect patterns in passwords generated by modern LLMs, attribute them to model families, and identify 28,000 likely LLM-generated passwords across public GitHub. The findings are a warning for teams adopting AI coding agents.
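The basic Markov-chain mechanic behind this kind of detection can be sketched in a few lines: train character-transition probabilities on a reference set of passwords, then score candidates by average log-likelihood, so strings that follow the learned patterns score higher. This is a toy illustration of the general technique, not Ferry's actual methodology, and the tiny corpus and add-one smoothing are assumptions for the sketch.

```python
import math
from collections import defaultdict

def train_bigram_model(passwords):
    """Character-level Markov chain: counts of next_char given current_char."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        for a, b in zip(pw, pw[1:]):
            counts[a][b] += 1
    return counts

def avg_log_likelihood(counts, pw, alphabet_size=95):
    """Mean per-transition log-probability with add-one smoothing.

    Higher values mean the string looks more like the training corpus.
    """
    total = 0.0
    for a, b in zip(pw, pw[1:]):
        row = counts[a]
        total += math.log((row[b] + 1) / (sum(row.values()) + alphabet_size))
    return total / max(len(pw) - 1, 1)

corpus = ["password1", "passw0rd", "letmein", "password123"]
model = train_bigram_model(corpus)
print(avg_log_likelihood(model, "password9"))  # pattern-like -> higher score
print(avg_log_likelihood(model, "xK7#qZv!"))   # unlike the corpus -> lower score
```

Attribution to a model family works the same way at larger scale: train one chain per candidate source, then attribute each password to whichever model assigns it the highest likelihood.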

System Prompts Are Not Security Controls: A Deleted Production Database Proves It

On April 25th, a Cursor AI coding agent running Anthropic's Claude Opus 4.6, one of the most capable models in the industry, deleted the production database for PocketOS, a software platform used by car rental businesses across the country to manage their entire operations. The deletion took 9 seconds.