AI Bias Is More Dangerous Than You Think #shorts

AI bias is a real problem, and bias can enter AI systems in many ways. That's why governments and organizations are focusing on responsible AI policies to ensure AI benefits everyone equally, not just one group. Responsible AI means reducing discrimination and ensuring fairness across all communities. Watch the full podcast: link below.

When AI Stops Assisting and Starts Acting

For decades, the service desk has operated on a simple assumption: humans must interpret every IT problem before action can be taken. A ticket is created. Teams investigate. Data is pulled from multiple tools. Eventually someone determines the root cause and decides what to do next. It works, but it is slow, reactive, and heavily manual. That assumption is starting to change. With Tanium AI agents in ServiceNow Now Assist for ITSM, connected to Tanium's real-time endpoint intelligence, machines can now understand issues, analyze live telemetry, and recommend or execute remediation in seconds.

eBPF for AI Agent Enforcement: What Kernel-Level Security Catches (and What It Misses)

Your team deployed Tetragon six months ago. TracingPolicies are humming along—you’re catching unauthorized binary executions, blocking suspicious network connections, and generating seccomp profiles from observed behavior. Runtime security for your traditional workloads is solid. Then engineering ships their first autonomous AI agent into production. A LangChain agent connected to internal databases, external APIs through MCP tool runtimes, and a vector database for RAG.
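For readers who haven't worked with Tetragon, the baseline described above is built from TracingPolicies. Below is a minimal sketch of one that kills any process executed out of /tmp; the policy name and path prefix are illustrative, and field names should be checked against the Tetragon release in use.

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: kill-tmp-exec        # illustrative name
spec:
  kprobes:
  - call: "sys_execve"       # hook process execution
    syscall: true
    args:
    - index: 0               # first argument: the binary path
      type: "string"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/tmp/"            # any binary under /tmp
      matchActions:
      - action: Sigkill      # kill the process before it runs
```

Policies like this are great at constraining what a process may execute or connect to, which is exactly why they struggle with an AI agent whose dangerous behavior is expressed through calls the policy already allows.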

Securing AI Agents on GKE: Where gVisor, Workload Identity, and VPC Service Controls Stop Working

You enable GKE Sandbox on a dedicated node pool, bind Workload Identity Federation to your AI agent pods, wrap your data services in a VPC Service Controls perimeter, and deploy your agents with the Agent Sandbox CRD using warm pools for sub-second startup. Your security posture dashboard shows every control configured and active. And then an attacker uses prompt injection to trick an agent into exfiltrating sensitive data through API calls that every single one of those layers explicitly allows.
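To make the gap concrete, here is a hedged sketch of what the sandboxed agent pod in that setup might look like (names and image path are invented for illustration). GKE Sandbox is selected via the gvisor RuntimeClass, and Workload Identity binds the Kubernetes service account to a GCP identity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ai-agent               # illustrative name
spec:
  runtimeClassName: gvisor     # runs the pod under GKE Sandbox (gVisor)
  serviceAccountName: agent-ksa  # KSA bound to a GCP service account
                                 # via Workload Identity Federation
  containers:
  - name: agent
    image: us-docker.pkg.dev/example-project/agents/langchain-agent:latest
```

Every line above constrains the infrastructure: the kernel surface, the credentials, the network perimeter. None of it inspects the prompt, which is why a correctly sandboxed pod can still be steered into exfiltrating data through API calls it is legitimately entitled to make.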

4 Phases, 357 Crashes, 2 Bugs: What an AFL++ Campaign Actually Looks Like

357 crash files. 2 real bug sites. That’s the outcome of this AFL++ campaign after roughly 8.5 billion executions across multiple harnesses, binaries, and phases. At first glance, everything looked like success. Crashes were increasing steadily. New inputs were being generated every few seconds. Coverage appeared to improve over time. From a surface-level perspective, the campaign looked productive. Then triage began.
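Triage is where 357 crash files collapse into 2 bug sites. A common first-pass heuristic, sketched below on made-up data (a real campaign would replay each crash under a sanitizer and parse the actual backtraces), is to bucket crashes by the top few frames of their stack:

```python
from collections import defaultdict

def bucket_crashes(traces, depth=3):
    """Group crash backtraces by their top `depth` frames.

    `traces` maps a crash-file name to a list of frames
    (innermost first). Crashes sharing the same top frames
    are assumed to hit the same underlying bug site.
    """
    buckets = defaultdict(list)
    for crash_id, frames in traces.items():
        buckets[tuple(frames[:depth])].append(crash_id)
    return buckets

# Toy data: five crash files, two distinct bug sites.
traces = {
    "id:000000": ["parse_chunk", "read_header", "main"],
    "id:000001": ["parse_chunk", "read_header", "main"],
    "id:000002": ["memcpy", "copy_payload", "main"],
    "id:000003": ["parse_chunk", "read_header", "main"],
    "id:000004": ["memcpy", "copy_payload", "main"],
}

print(len(bucket_crashes(traces)))  # prints 2: two unique bug sites
```

The same idea scales from five toy files to hundreds of real ones, which is how a pile of "successful" crashes shrinks to a handful of actionable bugs.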

Gemini XSS Vulnerability: When AI Executes Malicious Code

Artificial intelligence is no longer just generating text. It generates and executes code in real time. With tools like Google Gemini, features such as code canvases and live previews are turning AI systems into interactive execution environments. This shift introduces a new and rapidly growing category of risk: AI security vulnerabilities tied to real-time code execution.
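One baseline mitigation for this class of bug, sketched here with only the Python standard library (a real preview pane would also need a sandboxed iframe and a strict Content-Security-Policy, and `render_preview` is an invented helper name), is to escape model output before interpolating it into preview HTML:

```python
import html

def render_preview(model_output: str) -> str:
    """Wrap untrusted, model-generated text for display.

    html.escape neutralizes tags and attribute injection, so the
    payload is rendered as text instead of being executed.
    """
    return "<pre>{}</pre>".format(html.escape(model_output, quote=True))

payload = '<img src=x onerror="alert(1)">'
print(render_preview(payload))
# prints: <pre>&lt;img src=x onerror=&quot;alert(1)&quot;&gt;</pre>
```

Escaping treats all model output as data; the harder design question these live-preview features raise is what to do when executing the output is the product.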

AI Takes Over RSAC Conference (Now What?) with Dave Bittner

In this RSAC 2026 Conference recap, Dave Bittner, Host of the CyberWire Daily podcast, joins Data Security Decoded host Caleb Tolin from the guest seat to discuss the biggest theme dominating the conference: artificial intelligence, and, more specifically, agentic AI. From wall-to-wall AI messaging across San Francisco to in-depth conversations with security leaders and analysts, one thing became clear: the industry has moved past debating whether AI will take hold. It already has. Now, the focus has shifted to making it safe.

Is Your Patch Management Strategy Ready for AI-Powered Attacks? | Nishith Datta | Titan

In this episode of Guardians of the Enterprise, Ashish Tandon, Founder & CEO of Indusface, and Nishith Datta, Head of Cybersecurity at Titan, discuss one of the most pressing challenges in modern security: vulnerability patching in the age of AI. As AI accelerates both the scale and sophistication of attacks, traditional patching cycles are no longer enough. Nishith shares his frontline perspective on how enterprises securing omnichannel consumers must rethink their approach to exposure management.