
What You Need to Know about the DeepSeek Data Breach

DeepSeek, founded by Liang Wenfeng, is an AI development firm based in Hangzhou, China. The company focuses on developing open-source Large Language Models (LLMs) and specializes in data analytics and machine learning. DeepSeek gained global recognition in January 2025 with the release of its R1 reasoning model, which rivals OpenAI's o1 in performance at a substantially lower cost.

How to Securely Embrace the AI Revolution in Software Development

Software development is one of the workflows most affected by the Artificial Intelligence revolution. How will you handle AI-driven software development securely? Watch this video to see how our innovation can help you stop risks in AI and the software supply chain from the start.

Securing Code in the Era of Agentic AI

AI coding assistants like GitHub Copilot are transforming the way developers write software, boosting productivity and accelerating development cycles. However, while these tools generate code more efficiently, they also introduce new risks more efficiently, potentially embedding security vulnerabilities that could lead to severe breaches down the line. What is your plan for reducing risk from the vast amount of insecure code coming through agentic AI in software development?

Web-Based AI Agents: Unveiling the Emerging Insider Threat

The introduction of OpenAI’s ‘Operator’ is a game changer for AI-driven automation. Currently designed for consumers, it’s only a matter of time before such web-based AI agents are widely adopted in the workplace. These agents aren’t just chatbots; they replicate human interaction with web applications, executing commands and automating actions that once required manual input.

EP 1 - AI Gone Rogue: FuzzyAI and LLM Threats

In the inaugural episode of the Security Matters podcast, host David Puner dives into the world of AI security with CyberArk Labs' Principal Cyber Researcher, Eran Shimony. Discover how FuzzyAI is revolutionizing the protection of large language models (LLMs) by identifying vulnerabilities before attackers can exploit them. Learn about the challenges of securing generative AI and the innovative techniques used to stay ahead of threats. Tune in for an insightful discussion on the future of AI security and the importance of safeguarding LLMs.

Guarding open-source AI: Key takeaways from DeepSeek's security breach

In January 2025, within just a week of its global release, DeepSeek faced a wave of sophisticated cyberattacks. According to security researchers, the attacks involved well-organized jailbreaking attempts and DDoS assaults, revealing just how quickly open platforms can be targeted. Organizations building open-source AI models and platforms are now rethinking their security strategies as they witness the unfolding consequences of DeepSeek's vulnerabilities.

A Phased Approach: Thoughts on EU AI Act Readiness

The European Union’s (EU) AI Act (the Act) is landmark artificial intelligence (AI) regulation designed to promote trustworthy AI by focusing on its impact on people, requiring the mitigation of potential risks to health, safety and fundamental rights. The Act introduces a comprehensive and often complex framework for the development, deployment and use of AI systems, affecting a wide range of businesses across the globe.