Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

EP 1 - AI Gone Rogue: FuzzyAI and LLM Threats

In the inaugural episode of the Security Matters podcast, host David Puner dives into the world of AI security with CyberArk Labs' Principal Cyber Researcher, Eran Shimony. Discover how FuzzyAI is revolutionizing the protection of large language models (LLMs) by identifying vulnerabilities before attackers can exploit them. Learn about the challenges of securing generative AI and the innovative techniques used to stay ahead of threats. Tune in for an insightful discussion on the future of AI security and the importance of safeguarding LLMs.

CIEM Podcast - What it is. How it fits. Challenges you should know. Advice for how to get started.

This podcast is a quick but informative discussion into CIEM, it's definition, its importance, and its role within a comprehensive IAM and cybersecurity program. As organizations accelerate their migration to cloud environments, managing access and entitlements within these dynamic infrastructures becomes increasingly critical. Cloud Infrastructure Entitlements Management (CIEM) has emerged as a pivotal component in the broader Identity and Access Management (IAM) and cybersecurity landscape.

Web-Based AI Agents: Unveiling the Emerging Insider Threat

The introduction of OpenAI’s ‘Operator’ is a game changer for AI-driven automation. Currently designed for consumers, it’s only a matter of time before such web-based AI agents are widely adopted in the workplace. These agents aren’t just chatbots; they replicate human interaction with web applications, executing commands and automating actions that once required manual input.

Securing Code in the Era of Agentic AI

AI coding assistants like GitHub Copilot are transforming the way developers write software, boosting productivity, and accelerating development cycles. However, while these tools generate code more efficiently, they also introduce new risks more efficiently—potentially embedding security vulnerabilities that could lead to severe breaches down the line. What is your plan for reducing risk from the vast amount of insecure code coming through agentic AI in software development?

How to Securely Embrace the AI Revolution in Software Development

Software development is one of the most impacted workflows in the Artificial Intelligence revolution. How will you handle the AI-driven revolution in software development securely? Check out this video to see how our innovation can help you stop risks in AI and the software supply chain at the start.

What You Need to Know about the DeepSeek Data Breach

DeepSeek, founded by Liang Wenfeng, is an AI development firm located in Hangzhou, China. The company focuses on developing open source Large Language Models (LLMs) and specializes in data analytics and machine learning. DeepSeek gained global recognition in January 2025 with the release of its R1 reasoning model rivalling OpenAI's o1 model in performance but at a substantially lower cost.

Cloud invaders: Spotting compromised users before it's too late

Identities have become one of the most common ways modern threat actors gain a foothold in the cloud. From stolen credentials to overly permissive roles and privilege escalation, attackers use a range of tactics to exploit identities and use them to launch devastating breaches. Once inside your environment, they can move laterally, exploit resources, or steal sensitive data, leaving security teams scrambling to contain the damage.

7 Questions Tech Buyers Should Ask About How Their Vendors Use AI

As AI becomes an increasingly critical component in the digital supply chain, tech buyers are struggling to appropriately measure and manage their AI risk. Keeping tabs on emerging risk from the AI technology they use is hard enough. But often the most crucial AI business functions that organizations depend upon aren’t directly under their control or care, but instead are governed by the tech vendors that embed them into their underlying software.