Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How to Prevent Fileless and In-Memory Attacks with Aurora Endpoint Defense

See how Aurora Endpoint Defense prevents advanced memory and script-based attacks before they disrupt your business. Using Alpha AI, Aurora Endpoint optimizes threat detection and response while reducing analyst workload resulting in stronger protection and less operational strain.

How to Avoid Phishing Attacks: A Complete Guide for Users and IT Teams

Phishing remains one of the most common cyber threats, affecting users across industries and regions. It targets human behavior rather than technology, which makes it more effective than many other attack methods. Now, attackers are using advanced tools, like AI, to make phishing more effective. To know how to avoid phishing attacks, you must understand how they work and the different forms they take.

What Is API Token Hijacking? Steps to Detect and Stop the Attack

An API token is like a small digital key that tells a system that a user or an app is allowed to act in the system. When this key gets stolen, attackers act as real users and misuse the account. It’s called API token hijacking, and this issue has grown in the last few years. Most companies are not able to detect this problem in time. It’s important for IT/security teams to understand token theft to respond quickly and build stronger protection for future attacks.

React2Shell and related RSC vulnerabilities threat brief: early exploitation activity and threat actor techniques

On December 3, 2025, immediately following the public disclosure of the critical, maximum-severity React2Shell vulnerability (CVE-2025-55182), the Cloudforce One Threat Intelligence team began monitoring for early signs of exploitation. Within hours, we observed scanning and active exploitation attempts, including traffic originating from infrastructure associated with Asian-nexus threat groups.

Living off the Land - 2025 MITRE ATT&CK Enterprise Evaluations

The 2025 MITRE ATT&CK Enterprise Evaluations tested detecting malicious living-off-the-land attacks while avoiding false positives on legitimate tools. CrowdStrike delivered 100% detection and protection with zero false positives. Adversaries like Mustang Panda weaponize legitimate tools like PowerShell, WinRAR, and curl.exe while these same tools run legitimately across enterprises daily. You can't block these tools without collapsing operations.

Notorious Cybercrime Group is Now Targeting Zendesk Users

ReliaQuest warns that the cybercriminal collective “Scattered Lapsus$ Hunters” appears to be using social engineering attacks to target organizations’ Zendesk instances. This group was behind a widespread campaign earlier this year that used voice phishing attacks to compromise dozens of companies’ Salesforce portals.

Hackers hijack Google Smart Home #aisecurity #mcpserver

Building AI agents that can think, act, and adapt securely isn't easy. From prompt design to deployment, every stage brings new challenges and new risks. In this session, Bar-El Tayouri, Head of Mend AI at Mend.io, and Yehoshua (Shuki) Cohen, VP of Data and AI Evangelist at AI21 Labs, shared practical strategies for designing and defending agentic systems that actually deliver. Key topics covered: Originally recorded: October 29, 2024.

Malicious AI Tools Assist in Phishing and Ransomware Attacks

Researchers at Palo Alto Networks’ Unit 42 are tracking two new malicious AI tools, WormGPT 4 and KawaiiGPT, that allow threat actors to craft phishing lures and generate ransomware code. These tools are criminal alternatives to mainstream AI tools like ChatGPT, with no safety guardrails to prevent users from using them for malicious activities. The latest version of WormGPT offers lifetime access for $220, or a monthly fee of $50.

Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems

The rapid adoption of AI has introduced a new, semantic attack vector that many organizations are ill-prepared to defend against: prompt injection. While many security teams understand the threat of direct prompt injection attacks against AI agents developed by their organizations, another more subtle threat lurks in the shadows: indirect prompt injection attacks.