
Trustworthy AI Starts with Better Agents

The difference between an AI feature and an AI-led operating model becomes clear the moment a security problem gets hard. In real-world security operations, where the signal is ambiguous, the evidence spans multiple domains, and the attacker is behaving in unfamiliar ways, architecture matters far more than raw model capability.

Non-Human Identity Sprawl Is the Hidden Cost of AI Velocity

In the current AI boom, we race to adopt copilots, orchestration scripts, CI workflows, retrieval pipelines, and background jobs. Too often, we take for granted that every one of these things needs an identity. Service accounts. OAuth apps. API keys. Short-lived tokens. As AI velocity increases, so does the number of these non-human identities (NHIs). While we obsess over model quality, latency, hallucinations, and GPU costs, we also need to consider how these identities impact security.
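The blurb stops short of showing what NHI hygiene looks like in practice. A minimal sketch, assuming a hypothetical in-memory inventory where each identity records when its credential was last rotated (the names, fields, and 90-day window are illustrative, not from the article):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class NonHumanIdentity:
    name: str
    kind: str                # e.g. "service-account", "oauth-app", "api-key"
    last_rotated: datetime   # when this credential was last rotated

def stale_identities(inventory, max_age_days=90):
    """Return NHIs whose credentials are older than the rotation window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [nhi for nhi in inventory if nhi.last_rotated < cutoff]

inventory = [
    NonHumanIdentity("ci-deploy-bot", "service-account",
                     datetime.now(timezone.utc) - timedelta(days=200)),
    NonHumanIdentity("rag-pipeline-key", "api-key",
                     datetime.now(timezone.utc) - timedelta(days=10)),
]
print([nhi.name for nhi in stale_identities(inventory)])  # → ['ci-deploy-bot']
```

The point of the sketch is the shape of the problem: the inventory is the hard part, because AI tooling mints identities faster than most teams catalog them.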

Agentic commerce is happening now. Here’s what we’ve learned.

We’ve been collaborating with others to explore when and how agentic commerce will work. Robin Gandhi is the CPO of Lithic, a leading card issuer that’s already seeing agents use its cards to make purchases. Below, he shares his thoughts on what’s changed, and what needs to change, for agentic commerce to become mainstream. Last year, I wrote about the opportunity for agentic payments to revolutionize travel bookings, ad spend management, procurement, and more.

AI can do what now?! - Detecting financial fraud with Elastic Security

Financial fraud is increasingly cyber-enabled, requiring organizations to detect complex campaigns across transactions, identities, and digital systems faster and with greater accuracy. Join cybersecurity experts Lisa Jones-Huff and Joe Murin as they discuss how Elastic Security applies AI, machine learning, and generative AI to modern fraud detection. They’ll share how Elastic Security helps teams connect signals, reduce noise, accelerate investigations, and scale fraud prevention through emerging frameworks and standards across financial services organizations.

How Charlotte AI AgentWorks Fuels Security's Agentic Ecosystem

The era of human-speed defense is over. With eCrime breakout times collapsing to as fast as 27 seconds and attacks from AI-powered adversaries increasing 89% year-over-year, the traditional SOC has reached a breaking point. Manual processes, fragmented tools, and rule-based playbooks were built for a different era. Today, if your defense depends on human reaction time, you’re not just behind — you’re at risk.

Unify Kubernetes, VMs, and AI with VCF 9

Managing modern IT infrastructure often feels like balancing completely different ecosystems. For years, organizations have run separate, hand-built Kubernetes stacks on top of legacy virtualization platforms. Due to security concerns, it made sense to build a separate, tailored container environment they could automate and schedule to their exact needs. This fragmented approach leads to inconsistent security policies, fragile integrations between clusters, and operational silos.

Spring 2026 GenAI Code Security Update: Despite Claims, AI Models Are Still Failing Security

The last six months have been nothing short of revolutionary for AI-powered coding. OpenAI’s “Code Red” release brought us GPT-5.1 and 5.2. Google unveiled Gemini 3 with its touted “unprecedented reasoning capabilities.” Anthropic rolled out Claude 4.5 and 4.6, powering the increasingly ubiquitous Claude Code features. Enterprise adoption of tools like OpenClaw has exploded, with developers praising unprecedented productivity gains.

The AI Control Gap: Why Partners Are Now on the Front Line

For channel partners, AI has quickly moved from a future conversation to a current customer problem. Clients are already using AI across their organisations, often faster than governance can keep up. What’s emerging is not just another technology trend, but a new class of risk that customers cannot fully see or control. Our latest research, based on insights from senior security leaders in highly regulated industries, highlights the scale of the issue.

The Library That Holds All Your AI Keys Was Just Backdoored: The LiteLLM Supply Chain Compromise

We just published a deep breakdown of the Trivy supply chain attacks yesterday. Twenty-four hours later, we’re writing about the next one. Same threat actor. Different target. Worse implications. This time it’s LiteLLM, the Python library that acts as a universal API gateway for over 100 LLM providers. If you’re building anything with AI agents, MCP servers, or LLM orchestration, there’s a good chance LiteLLM is somewhere in your dependency tree.
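The blurb doesn't prescribe a fix, but a standard mitigation for this class of compromise is pinning dependencies by cryptographic digest, so a swapped artifact fails verification at install time instead of running. A minimal sketch of the underlying check (the payload and digest here are illustrative, not real LiteLLM values):

```python
import hashlib

def verify_artifact(payload: bytes, expected_sha256: str) -> bool:
    """Reject any artifact whose SHA-256 digest differs from the pinned value."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Illustrative only: a digest pinned when the dependency was first vetted.
trusted = b"litellm-1.x source tarball contents"
pinned_digest = hashlib.sha256(trusted).hexdigest()

assert verify_artifact(trusted, pinned_digest)                     # untouched artifact passes
assert not verify_artifact(b"backdoored contents", pinned_digest)  # tampered artifact fails
```

In practice this is what pip's hash-checking mode does: `pip install --require-hashes` compares each downloaded distribution against `--hash=sha256:...` entries in the requirements file, so a backdoored release with a different digest fails installation rather than landing in your dependency tree.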