
How Reach Security Automates Remediation and Prevents Configuration Drift

From identification to remediation to drift management. When Reach flags an exposure, it doesn't stop there. It shows exactly how much risk you'll reduce by fixing it, and what impact the fix will have on users. In this short demo, CRO Jared Phipps walks through how Reach:

- Quantifies residual risk reduction (e.g., 62%, 91%)
- Weighs that against user impact
- Guides teams through the remediation process
- Integrates with Jira and other ticketing systems to track fixes
- Monitors configurations to prevent drift and maintain baselines
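The trade-off between risk reduction and user impact can be illustrated with a toy prioritization score. The formula, field names, and numbers below are hypothetical illustrations, not Reach's actual model:

```python
from dataclasses import dataclass

@dataclass
class Fix:
    name: str
    risk_reduction: float  # fraction of residual risk removed, e.g. 0.62
    user_impact: float     # 0 (invisible to users) .. 1 (highly disruptive)

def priority(fix: Fix, impact_weight: float = 0.5) -> float:
    """Hypothetical score: reward big risk reduction, penalize user disruption."""
    return fix.risk_reduction - impact_weight * fix.user_impact

fixes = [
    Fix("Enforce MFA on admin accounts", 0.91, 0.30),
    Fix("Block legacy auth protocol", 0.62, 0.10),
]
# Rank fixes best-first by the score above.
for fix in sorted(fixes, key=priority, reverse=True):
    print(f"{fix.name}: score {priority(fix):.2f}")
```

The `impact_weight` knob lets a team tune how strongly user disruption should discount an otherwise high-value fix.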

From Model Drift to API Exploitation: The Next Challenge in AI Security

In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar and Madhav Aggarwal summarize why external guardrails are the only sustainable defense against the new wave of AI exploitation. Jamison Utter then sets the stage for the next topic in the series: securing the fundamental protocols and APIs that AI agents rely on.

OWASP Top 10 Business Logic Abuse: What You Need to Know

Over the past few years, API security has gone from a relatively niche concern to a headline issue. A slew of high-profile breaches and compliance mandates like PCI DSS 4.0 have woken security teams up to the reality that APIs are the front door to their data, infrastructure, and revenue streams. OWASP recently published its first-ever Business Logic Abuse Top 10 list, a clear indication that the industry is taking API security and all its nuances seriously.

The Critical Inflection Point: Navigating Apex Risks from AI to Stolen Credentials

The global cyber threat landscape has accelerated beyond what traditional defenses can handle, reaching a critical inflection point. Today, organizations are no longer battling isolated attackers; instead, they are confronting industrialized, financially motivated cyber syndicates that leverage cutting-edge technologies to maximize their impact. Moreover, the rise of AI in cybersecurity has created both opportunities and threats.

Why Every Tech Company is Talking About OWASP for AI (and You Should Too)

AI is changing everything, but with innovation comes new risks. In this episode of AI on the Edge, we dive deep into OWASP's Top 10 for Large Language Models with security leader Steve Wilson (Exabeam). Discover why every tech company is suddenly talking about LLM security and how you can stay ahead. Inside this episode: why traditional security doesn't work for AI, lessons from Steve's new book, The Developer's Playbook for LLM Security, and actionable tips to protect your AI systems.

Agentic Controls for an Agentic World: Why Traditional Security Can't Keep Up

AI agents now move data, collaborate, and make decisions at machine speed: millions of actions per second. But our entire security architecture was built for humans, not for autonomous AI. In this new Agentic World, every action is faster, every breach harder to detect, and every compliance gap more dangerous. Protecto introduces Agentic Controls: intelligent, context-aware CBAC Agents that live inside AI workflows. They understand policies written in plain English, enforce zero-trust decisions before data ever leaves its boundary, and protect privacy across industries.
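The zero-trust enforcement point described above can be sketched as a deny-by-default boundary check. This is an illustrative simplification: the flow tuples and function names are invented for this example, and the natural-language policy interpretation that Protecto describes is not shown here:

```python
# Explicitly allowed (agent, data class, destination) flows; everything
# else is denied by default -- the core of a zero-trust release decision.
ALLOWED_FLOWS = {
    ("support-agent", "customer-email", "crm"),
}

def may_release(agent: str, data_class: str, destination: str) -> bool:
    """Data leaves its boundary only when an explicit allow rule exists."""
    return (agent, data_class, destination) in ALLOWED_FLOWS

print(may_release("support-agent", "customer-email", "crm"))         # True
print(may_release("support-agent", "customer-ssn", "external-api"))  # False
```

The key design choice is the default: an unknown flow fails closed rather than open, so a new agent or destination gets no access until a policy grants it.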

Find the Fixer: The AI Agent Bringing Order to Ownership

Assigning remediation tasks across an enterprise organization can feel like navigating a maze of inconsistent tags, overlapping teams, and unclear ownership. It’s one of the most persistent operational challenges in vulnerability and exposure management, and one of the biggest barriers to speed. Each scanner and cloud platform comes with its own tagging logic. One system uses ProductOwner, another productowner. Some tags are outdated, others duplicated, and many have no clear purpose.
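The tag inconsistency described above can be illustrated with a minimal normalization sketch. The helper names, canonical key, and asset records are hypothetical, not Reach's actual logic:

```python
import re

# Hypothetical canonical key for owner tags.
CANONICAL_OWNER_KEY = "product_owner"

def canonicalize(tag_key: str) -> str:
    """Collapse case and separators so 'ProductOwner' == 'productowner'."""
    key = re.sub(r"[-_\s]", "", tag_key).lower()
    return CANONICAL_OWNER_KEY if key in ("productowner", "owner") else tag_key

def merge_owner_tags(assets: list[dict]) -> dict[str, str]:
    """Map asset id -> owner, unifying inconsistent tag keys across scanners."""
    owners = {}
    for asset in assets:
        for key, value in asset.get("tags", {}).items():
            if canonicalize(key) == CANONICAL_OWNER_KEY:
                owners[asset["id"]] = value
    return owners

assets = [
    {"id": "vm-1", "tags": {"ProductOwner": "team-payments"}},
    {"id": "db-2", "tags": {"productowner": "team-data"}},
    {"id": "fn-3", "tags": {"env": "prod"}},  # no owner tag at all
]
print(merge_owner_tags(assets))  # {'vm-1': 'team-payments', 'db-2': 'team-data'}
```

Even this toy version surfaces the hard residual problem the post points at: assets like `fn-3` that carry no owner tag under any spelling still need an owner assigned some other way.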

5 Critical LLM Privacy Risks Every Organization Should Know

Large language models take in unstructured data. They transform it into context, embeddings, and answers. That journey touches raw files, vector stores, model logs, and third-party services. Traditional privacy programs focus on databases and forms. LLMs push risk to the edges. The riskiest moments are when you ingest messy content, when your system retrieves chunks to support an answer, and when an agent with tool access is tricked into over-sharing.
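One common mitigation at the ingestion step is scrubbing sensitive values before content is chunked and embedded. The sketch below is a deliberately simplified illustration using two regex patterns; production systems rely on dedicated PII detectors, and these pattern and function names are assumptions, not any vendor's API:

```python
import re

# Illustrative patterns only; real detectors cover far more PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace detected PII with typed placeholders before chunking/embedding."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

raw = "Contact Jane at jane.doe@example.com, SSN 123-45-6789."
print(scrub(raw))  # Contact Jane at [EMAIL], SSN [SSN].
```

Scrubbing at ingestion only covers the first risky moment the post lists; retrieval-time filtering and agent tool-call controls would need their own checks.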

"Trust in AI Starts with Transparency | Sebastian Goodwin (Autodesk) x Reach Security"

Trust in AI starts with transparency. In our recent conversation, “No Time to Drift,” Sebastian Goodwin, Chief Trust Officer at Autodesk, shares how his team is putting that principle into practice — by creating AI Transparency Cards. Think of them like nutrition labels for AI: clear, consistent, and designed to help customers understand what’s inside. Each one outlines what the model does, how it’s trained, safeguards in place, and more.