
Solving the AI Data Gap: Secure Enterprise File Access via Egnyte's MCP Server

Enterprise organizations face a fundamental challenge in AI adoption. While tools like ChatGPT and Claude offer transformative capabilities, their effectiveness is limited without secure access to organizational data. Critical business information often stays locked in protected repositories, so AI assistants cannot deliver business-specific insights and fall short of their potential.

The State of Cloud Security in 2026, with Shira Rubinoff

What really happened in cloud security in 2025, and what should security leaders prepare for in 2026? In this session, cybersecurity leader Shira Rubinoff breaks down the biggest cloud security challenges organizations faced in 2025, why cloud misconfigurations and IAM complexity are still major risks, and how CISOs should rethink cloud security strategy and budgeting for 2026.

runc container escape explained: Critical container vulnerabilities & host takeover risk

Containers are supposed to be isolated — but what happens when that isolation breaks? In this video, we explain critical container escape vulnerabilities in runc, the default container runtime used by Docker and Kubernetes, and why they represent a serious container security risk. Recent disclosures known as the “Leaky Vessels” vulnerabilities show how a compromised container can escape its sandbox, access the host filesystem, and potentially take over the node.
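The practical first step against these flaws is knowing whether your runtime is patched. Below is a minimal sketch that flags runc versions older than 1.1.12, the first release that fixed CVE-2024-21626 (the runc issue in the "Leaky Vessels" set); in practice you would feed in the version reported by `runc --version`.

```python
# Minimal sketch: compare a runc version string against the first
# patched release for CVE-2024-21626 ("Leaky Vessels"), runc 1.1.12.
PATCHED = (1, 1, 12)

def is_patched(version: str) -> bool:
    """Return True if a runc version string (e.g. "1.1.11") is >= 1.1.12."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= PATCHED

print(is_patched("1.1.11"))  # False: vulnerable, upgrade
print(is_patched("1.1.12"))  # True: patched
```

Tuple comparison handles versions with different segment counts (e.g. "1.2" compares greater than "1.1.12"), which keeps the check simple without a third-party versioning library.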

Demo: Access controls for GenAI and agentic AI

See how Cloudflare One simplifies access controls across both generative AI and agentic AI communication, all from one unified secure access service edge (SASE) dashboard. This demo highlights:
- Securing human-to-AI connections by blocking or redirecting traffic from unapproved tools and isolating AI apps to protect data (0:09)
- Streamlining access to MCP servers for AI-to-resource connections via Cloudflare's MCP server portals (1:10)

Demo: Discover workforce use of shadow AI

See how Cloudflare One helps restore visibility and control over unsanctioned use of AI tools. This demo highlights secure access service edge (SASE) capabilities including:
- Shadow AI reporting: analyze how AI apps are used across your environment (0:10)
- Application confidence scores: evaluate the risks posed by specific AI apps (1:10)
- Access controls: allow, block, redirect, isolate, and more based on an app's approval status (1:45)
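At its core, shadow AI reporting means matching egress traffic against a catalog of known AI services. A minimal stdlib sketch, using a small hypothetical domain list (not Cloudflare's actual catalog):

```python
from urllib.parse import urlparse
from collections import Counter

# Hypothetical catalog of AI service domains; a real SASE product
# maintains a much larger, continuously curated list.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(urls: list[str]) -> Counter:
    """Count requests per known AI domain in a list of visited URLs."""
    hits = Counter()
    for url in urls:
        host = urlparse(url).hostname
        if host in AI_DOMAINS:
            hits[host] += 1
    return hits

log = [
    "https://chat.openai.com/c/abc",
    "https://intranet.example.com/wiki",
    "https://claude.ai/chat/123",
    "https://chat.openai.com/c/def",
]
print(shadow_ai_report(log))  # Counter({'chat.openai.com': 2, 'claude.ai': 1})
```

A production implementation would read from gateway logs and match subdomains and IP ranges as well, but the counting shape of the report is the same.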

Demo: Prevent data exposure in AI

See how Cloudflare One helps protect sensitive data when users interact with generative AI apps. This demo highlights secure access service edge (SASE) capabilities including:
- Data loss prevention (DLP) detections for sensitive content (e.g., PII, source code, financials) (0:22)
- Detections for data at rest in AI tools like ChatGPT (1:00)
- Guardrails for user prompts based on intent/topic to block jailbreak attempts, code abuse, PII requests, and other risky behavior (2:12)
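DLP detection fundamentally means scanning content for sensitive-data signatures before it leaves the network. A simplified stdlib sketch of the idea, with illustrative patterns only (these are not Cloudflare's detection profiles):

```python
import re

# Illustrative patterns; real DLP profiles combine many validated
# detections with context and confidence scoring.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def dlp_findings(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in DLP_PATTERNS.items() if pat.search(text)]

prompt = "Summarize this record: SSN 123-45-6789, key AKIA1234567890ABCDEF"
print(dlp_findings(prompt))  # ['ssn', 'aws_key']
```

A gateway would run checks like this on each outbound prompt and block or redact on a match, which is what prevents the data from ever reaching the AI tool.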

Demo: Manage security posture of GenAI apps

See how Cloudflare One helps you manage the security posture of GenAI tools like ChatGPT, Claude, and Gemini. This demo highlights:
- API integrations: available for ChatGPT, Gemini, Claude, and most popular SaaS apps (0:18)
- Posture findings: scan for misconfigurations, unauthorized activity, and other security issues (0:50)
- Shadow AI discovery: find which third-party AI apps access your SaaS tools (1:15)

The Easiest Way to Get Hacked: Open Introspection. #graphql #businesslogic #apisecurity #rbi

The RBI incident (Burger King, Tim Hortons) proves that business logic abuse (BLA) often results from a cascade of simple flaws, not one complex attack. The key mistake: GraphQL introspection was enabled. This gave the attacker the full API blueprint, the map needed to find the open registration validation flaw and execute a massive data leak. Action item: if you run GraphQL, check your production settings now and disable introspection. Don't hand the attacker the map to your castle!
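The fix itself is usually a server configuration flag (for example, `introspection: false` in Apollo Server), but the underlying check can be illustrated with a minimal stdlib sketch that rejects introspection queries in production. The field names `__schema` and `__type` are the introspection entry points defined by the GraphQL spec:

```python
import re

# Introspection entry-point fields defined by the GraphQL spec.
INTROSPECTION_FIELDS = re.compile(r"\b__(schema|type)\b")

def should_block(query: str, production: bool = True) -> bool:
    """Block queries that request the schema blueprint in production."""
    return production and bool(INTROSPECTION_FIELDS.search(query))

print(should_block("{ __schema { types { name } } }"))  # True: blocked
print(should_block("{ user(id: 1) { name } }"))         # False: normal query
```

Prefer the server's built-in switch or a validation rule over string matching where available; the point is simply that denying these two fields in production denies the attacker the map.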