
How Blockchain Is Reshaping Banking Infrastructure

Blockchain adoption in banking is moving from experimentation to production. In this session, Fireblocks Financial Markets Economist Neil Chopra breaks down where banks, fintechs, and non-bank competitors are already live, what wallet infrastructure means for onchain ownership and control, and why stablecoins are proving the utility case that's pulling the rest of the market forward.

Exploited Before CISA KEV: What 8 Confirmed Cases Reveal

Most vulnerability programs are built to act when risk looks obvious, such as when a vulnerability lands in CISA KEV, a public exploit emerges, or EPSS rises. This approach is rational because it provides a clear, defensible trigger for action. But it often comes with delay: by the time signals are strong enough to drive consensus, the window to get ahead of risk may already be closing.
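The consensus-driven policy described above can be sketched in a few lines. This is an illustrative model, not any vendor's tool; the `VulnSignals` type, field names, and the 0.5 EPSS threshold are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class VulnSignals:
    """Hypothetical snapshot of the public signals for one CVE."""
    in_cisa_kev: bool      # listed in CISA's Known Exploited Vulnerabilities catalog
    public_exploit: bool   # a public PoC or exploit exists
    epss: float            # EPSS probability score, 0.0 to 1.0

def should_act(v: VulnSignals, epss_threshold: float = 0.5) -> bool:
    """Consensus-style trigger: act only once a strong, defensible signal lands."""
    return v.in_cisa_kev or v.public_exploit or v.epss >= epss_threshold

# Before strong signals arrive, this policy says "wait" -- which is exactly
# the delay the article describes. The same CVE may already be exploited.
quiet = VulnSignals(in_cisa_kev=False, public_exploit=False, epss=0.12)
obvious = VulnSignals(in_cisa_kev=True, public_exploit=False, epss=0.12)
print(should_act(quiet))    # False
print(should_act(obvious))  # True
```

The gap the eight confirmed cases expose is the interval where `should_act` returns `False` while exploitation is already underway.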

Prompt and Tool Call Visibility: What Your AI Agents Are Actually Doing

It is 11:47 p.m. and the on-call security engineer is staring at two dashboards. On the left, LangSmith — the ML team’s debugging stack — showing the agent’s prompts, model responses, tool calls, and tokens consumed. On the right, the runtime detection console showing eBPF-captured syscalls, network connections, and process trees from the same Pod. Both are populated.

AI-SPM for Financial Services: Managing AI Risk Under SOC2, PCI-DSS, and MAS TRM

The external auditor’s evidence request lands Tuesday morning. A security architect at a Tier 1 bank pulls up her AI-SPM dashboard for the SOC2 Type 2 review. Eighty-three AI agents running across the bank’s clusters. For each one, the dashboard shows the current configuration and the current behavioral baseline. The data is accurate, comprehensive, and point-in-time.

'Mini Shai-Hulud' supply chain attack targets SAP npm packages

On April 29, 2026, security researchers detailed a campaign known as ‘mini Shai-Hulud’ that involves compromised versions of npm packages used in SAP’s Cloud Application Programming Model (CAP). The malicious packages reportedly contain functionality to steal sensitive data such as credentials. The stolen data is encrypted and exfiltrated via public GitHub repositories. The maintainers of known-compromised packages have released updated versions.

The Metric AI Security is Missing

As autonomous and semi-autonomous AI systems take on more responsibility within the enterprise, they shift from being “features” of software to becoming true internal actors. They make decisions, take actions, call tools, orchestrate workflows, and influence other AI agents. With this evolution, we must confront an uncomfortable truth: the metrics and response patterns we built for deterministic software no longer work.

Beyond the Build: Dynamic Remediation for Malicious Package Versions

In the fast-moving world of software supply chains, the discovery of a malicious version of a popular library often triggers a state of emergency. Traditional security tools take a reactive approach: they scan, they find a match, and they fail the build. But what happens if the malicious version was merged before it was flagged? What if it’s already running in your production containers? Or what if it’s being pulled dynamically across hundreds of different pipelines?
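The shift from build-time to runtime remediation can be sketched as a check of a running inventory (e.g. from an SBOM or container scan) against a blocklist of known-malicious versions. A minimal sketch, not any vendor's product; the package names and versions in `MALICIOUS` are invented for illustration.

```python
# Hypothetical blocklist: package name -> set of known-malicious versions.
MALICIOUS = {
    "example-lib": {"2.3.1", "2.3.2"},
    "another-pkg": {"0.9.7"},
}

def flag_malicious(inventory: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return the (name, version) pairs in a runtime inventory that are blocklisted."""
    return [
        (name, version)
        for name, version in inventory
        if version in MALICIOUS.get(name, set())
    ]

# An inventory taken from a production container -- not from a CI build --
# catches versions that were merged or pulled before the advisory existed.
running = [("example-lib", "2.3.1"), ("left-pad", "1.3.0")]
print(flag_malicious(running))  # [('example-lib', '2.3.1')]
```

The design point is that the blocklist is evaluated against what is *running*, not only against what is being built, so a version flagged after the fact is still caught.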

Emerging Threat: GitHub Enterprise Server RCE via Git Push Injection (CVE-2026-3854)

CVE-2026-3854 is a command injection vulnerability in GitHub Enterprise Server's git push pipeline. User-supplied push option values were not properly sanitized before being embedded in an internal service header, and the header format used a delimiter that could also appear in user input. A crafted push option containing that delimiter let an attacker inject additional metadata fields, which downstream services treated as trusted internal values.
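The general bug class can be illustrated in a few lines. This is a sketch of delimiter injection into a field-separated header, not GitHub's actual code; the delimiter, field names, and functions are all invented for the example.

```python
# Illustrative only -- not GHES internals. Shows the general bug class:
# attacker-controlled input embedded into a delimiter-separated header.

DELIM = "\x1f"  # hypothetical field delimiter used by an internal protocol

def build_header_vulnerable(user: str, push_option: str) -> str:
    """Naively concatenates fields; the push option is attacker-controlled."""
    return DELIM.join(["user=" + user, "opt=" + push_option])

def parse_header(header: str) -> dict:
    """A downstream service that trusts every delimited field it receives."""
    return dict(f.split("=", 1) for f in header.split(DELIM))

# A push option containing the delimiter smuggles in an extra "trusted" field:
evil_opt = "normal" + DELIM + "role=admin"
print(parse_header(build_header_vulnerable("alice", evil_opt)))
# {'user': 'alice', 'opt': 'normal', 'role': 'admin'}

def build_header_fixed(user: str, push_option: str) -> str:
    """Mitigation sketch: reject (or escape) input containing the delimiter."""
    if DELIM in push_option:
        raise ValueError("push option contains reserved delimiter")
    return DELIM.join(["user=" + user, "opt=" + push_option])
```

The injected `role=admin` field is indistinguishable from a legitimate internal value once parsed, which is why sanitizing at the boundary (as in `build_header_fixed`) is the fix rather than hardening the parser.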

How Zero Standing Privileges Defuses the Shadow AI Agent Problem

As more organizations move past experimentation and start planning real AI agent deployments, the same set of concerns keeps surfacing in our conversations with security teams. Whether the worry is a shadow agent that shows up uninvited or a sanctioned agent going rogue, the questions tend to cluster around control. These are the right questions to be asking, and they share a common answer that's more concrete than most people expect: AI agents are only as dangerous as the privileges they can reach.
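The zero-standing-privileges idea can be sketched as a broker that issues short-lived, narrowly scoped grants instead of standing credentials. A minimal sketch under assumed names (`JITBroker`, `Grant`, the scope strings), not a specific vendor's API.

```python
import time
import secrets
from dataclasses import dataclass

@dataclass
class Grant:
    token: str
    scope: str          # the one action this grant permits
    expires_at: float   # epoch seconds

class JITBroker:
    """Issues ephemeral grants so agents hold no credentials at rest."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds

    def issue(self, agent_id: str, scope: str) -> Grant:
        # A real broker would evaluate policy (and possibly require
        # approval) before issuing; this sketch grants unconditionally.
        return Grant(
            token=secrets.token_hex(16),
            scope=scope,
            expires_at=time.time() + self.ttl,
        )

    def authorize(self, grant: Grant, requested_scope: str) -> bool:
        """A grant is valid only for its exact scope and only until it expires."""
        return grant.scope == requested_scope and time.time() < grant.expires_at

broker = JITBroker(ttl_seconds=0.1)
g = broker.issue("agent-42", scope="read:invoices")
print(broker.authorize(g, "read:invoices"))    # True
print(broker.authorize(g, "delete:invoices"))  # False (scope mismatch)
time.sleep(0.2)
print(broker.authorize(g, "read:invoices"))    # False (expired)
```

A shadow agent that never registers with the broker simply has nothing to reach, and a sanctioned agent that goes rogue is bounded by whatever scope and TTL its last grant carried.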