
Guarding open-source AI: Key takeaways from DeepSeek's security breach

In January 2025, within a week of its global release, DeepSeek faced a wave of sophisticated cyberattacks. According to security researchers, the attacks involved coordinated jailbreaking attempts and DDoS assaults, revealing just how quickly open platforms can be targeted. Organizations building open-source AI models and platforms are now rethinking their security strategies as they witness the unfolding consequences of DeepSeek’s vulnerabilities.

Gcore Radar report reveals 56% year-on-year increase in DDoS attacks

Gcore, the global edge AI, cloud, network, and security solutions provider, today announced the findings of its Q3-Q4 2024 Radar report into DDoS attack trends. DDoS attacks have reached unprecedented scale and disruption in 2024, and businesses need to act fast to protect themselves from this evolving threat. The report reveals a significant escalation in the total number of DDoS attacks and their magnitude, measured in terabits per second (Tbps).

What is a man-in-the-middle attack? Definition & examples

A man-in-the-middle (MitM) attack occurs when a cybercriminal secretly intercepts and manipulates communications between two parties who believe they are interacting directly. It is currently one of the most deceptive and dangerous cyber threats. Such attacks often lead to data theft, unauthorized access, and compromised privacy, among other consequences.
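The interception-and-manipulation pattern described above can be illustrated with a minimal in-memory sketch. The classes below (`Party`, `MitmChannel`) are hypothetical constructs for illustration only; a real MitM attack intercepts live network traffic, which this toy model deliberately omits. The sender believes she has a direct channel to the receiver, but an attacker sits in between, logging every message (data theft) and altering its contents (manipulation).

```python
# Toy sketch of a man-in-the-middle attack on an unencrypted channel.
# All names here are illustrative, not from any real attack tooling.

class Party:
    """An endpoint that receives plaintext messages."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)


class MitmChannel:
    """An attacker-controlled channel the sender mistakes for a direct link.

    The attacker silently copies each message (eavesdropping) and
    tampers with it before forwarding (manipulation).
    """
    def __init__(self, receiver, attacker_log):
        self.receiver = receiver
        self.attacker_log = attacker_log

    def send(self, message):
        self.attacker_log.append(message)  # eavesdrop: attacker keeps a copy
        # manipulate: redirect the payment before delivery
        tampered = message.replace("account 111", "account 999")
        self.receiver.receive(tampered)


bob = Party("Bob")
attacker_log = []
# Alice believes this channel goes straight to Bob.
channel = MitmChannel(bob, attacker_log)
channel.send("Please wire $500 to account 111")

print(attacker_log[0])  # the attacker saw the original message
print(bob.inbox[0])     # Bob received the altered message
```

The sketch also shows why the attack works: nothing on the unprotected channel lets Bob verify the message's origin or integrity. In practice, defenses such as TLS with proper certificate verification authenticate the endpoints and detect tampering, which is why MitM attacks so often rely on tricking victims onto unencrypted or improperly validated connections.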

The AI Shared Responsibility Model: Whose Job Is It Anyway?

In this episode of Into the Breach, James Purvis and Filip Verloy explore the AI Shared Responsibility Model, a framework introduced by Microsoft. They break down the roles and responsibilities of cloud providers, model providers, and customers in securing AI-powered environments. From the unique challenges of generative AI tools like Copilot to the importance of proactive data governance, the discussion offers practical insights into navigating AI security today and in the future.

A Phased Approach: Thoughts on EU AI Act Readiness

The European Union’s (EU) AI Act (the Act) is landmark artificial intelligence (AI) regulation designed to promote trustworthy AI by requiring mitigation of potential risks to people’s health, safety, and fundamental rights. The Act introduces a comprehensive and often complex framework for the development, deployment, and use of AI systems, affecting a wide range of businesses across the globe.