Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

WebPromptTrap - New Indirect Prompt Injection Vulnerability in BrowserOS

Cato researchers have discovered a new indirect prompt injection exploit pattern workflow in BrowserOS (an open-source agentic AI browser). We named it “WebPromptTrap” because the prompt originates from untrusted web content and it traps users into approving an authorization step through a trusted-looking AI summary.

How Connected Vehicles and AI Are Redefining Insurance and Digital Security Risks

The way we drive is changing. Cars are no longer just machines that take us from one place to another. They are now connected systems that collect data, communicate with networks, and use artificial intelligence to improve safety and performance. These connected vehicles are transforming industries like insurance and cybersecurity in ways we are only beginning to understand.

The Shift to Continuous Context and the Rise of Guardian Agents

AI agent risk doesn’t emerge in a single moment. It develops over time across configuration changes, runtime behavior, long-horizon tasks, and interactions between agents, users, and enterprise systems. Their behavior and exposure can shift in real time as agents rewrite instructions, update memory, and dynamically alter execution.

BewAIre: Detecting Malicious Pull Requests at Scale with LLMs

As AI coding assistants accelerate software development, the volume of pull requests at Datadog has grown to nearly 10,000 per week, increasing the risk that malicious changes slip through due to review fatigue. To address this, Datadog built BewAIre, an LLM-powered code review system designed to identify malicious source code changes introduced by threat actors. By reducing approval fatigue for developers while increasing friction for attackers, BewAIre guides human reviewers to the areas where judgment matters most, without slowing developer velocity.

Homomorphic Encryption in LLM Pipelines: Why It Fails in 2026

There’s a claim gaining traction in the market: homomorphic encryption can preserve data privacy in AI workflows. Encrypt your data, run it through a language model, and never expose a single token. Sounds bulletproof. It isn’t. Homomorphic encryption (HE) was built for math, not language. Applying it to LLM pipelines is like encrypting a book and asking someone to summarize it without reading a word. The problem isn’t efficiency.

How to Manage Identity Sprawl in the Age of AI Agents and NHIs

Non-human identities (NHIs) and AI Agents including service accounts, CI/CD credentials and cloud workload identities, now eclipse human identities in enterprise identity systems by 50:1 to 100:1. Modern identity security platforms must assign identities to these assets and furthermore, apply roles, access control policies, visibility and governance in order to secure the modern enterprise.

How to Manage Unauthorized AI Tool Usage in Your Business

In only a few years, artificial intelligence (AI) has changed almost every aspect of life, and especially so in business. Today, employees are using generative AI tools to draft emails, code software, and analyze data at lightning speed. However, there is a hidden side to this productivity boost: unauthorized AI use. Many employees are bypassing official IT channels and using shadow AI applications to get their work done.

New CrowdStrike Innovations Secure AI Agents and Govern Shadow AI Across Endpoints, SaaS, and Cloud

As organizations race to adopt new AI tools, deploy AI agents, and build AI-powered software, they create new attack surfaces that traditional security controls were never designed to protect. A key example is the prompt and agentic interaction layer, which faces novel threats like indirect prompt injection and agentic tool chain attacks.

AI vs AI: Securing the Expanding Cyber Attack Surface | Mr. Anirban Mukherji at ET Studios

In this exclusive interview byte at ET Studios, Our Founder & CEO Mr. Anirban Mukherji discusses how increasing enterprise connectivity through cloud applications, third-party integrations, and remote work is exploding the enterprise cyber attack surface making identity security and access control more critical than ever. He dives into key threats like traditional ransomware, zero-day supply chain attacks, hyper-personalized AI phishing, and systemic incidents.

Your AI Isn't Broken... Your Data Is #shorts #ai

Your AI works perfectly during testing… but suddenly fails in production. Why? The problem usually isn’t the model — it’s the data. Synthetic data looks clean and structured. But real-world data is messy: typos, missing values, broken formats, and unexpected edge cases. When AI models train only on synthetic datasets, they never learn how to handle real-world complexity. In this video, we explain why synthetic data can break AI systems and how using real production data safely can make AI more reliable.