Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Dark Web vs Deep Web: What's the Difference and Why CISOs Should Care

Understanding the Deep Web and Dark Web is essential for CISOs navigating today’s threat landscape. This blog breaks down their differences, the risks they pose, and how intelligence-led monitoring helps organisations detect, prevent, and respond to cyber threats before they escalate.

PhishinGit - GitHub.io pages abused for malware distribution

This blog discusses PhishinGit, a phishing campaign uncovered by CYJAX that abuses GitHub.io pages to distribute malware disguised as Adobe downloads. It explains how threat actors used Browser-in-the-Browser (BitB) techniques, Dropbox-hosted payloads, and anti-analysis JavaScript to evade detection. The blog also explores the attack chain, observed mitigations, MITRE ATT&CK mapping, and indicators of compromise (IOCs) to help organisations identify and defend against similar threats.

Engine Fault: Search engine poisoning targets airline support numbers

This blog explores a CYJAX investigation into a search engine poisoning campaign impersonating 14 global airlines, including KLM, Delta, and Lufthansa. Over 150 fake support pages were found hosting fraudulent contact numbers, tricking users into calling threat actors. The post examines how these scams exploit SEO, manipulate AI-enhanced search results, and what users can do to stay protected.

Why Human Validation Matters in Threat Intelligence

In today’s hyper-connected digital landscape, trust cannot be assumed; every system, application, and transaction is potentially vulnerable. As organisations increasingly rely on digital infrastructure, ensuring the security and reliability of these systems is critical. This is where human validation plays a pivotal role. Human validation means verifying that something is true, present, or accurate by actively demonstrating it, rather than simply assuming it works as intended.

How Threat Actors Exploit AI Tools: A CTI Perspective

Artificial Intelligence (AI) is transforming cybersecurity, but not always for the better. While organisations adopt AI to strengthen their defences, cybercriminals and nation-state actors are exploiting the same tools to launch faster, more sophisticated, and harder-to-detect attacks. From AI-powered phishing and malware evasion to deepfake-enabled fraud, adversarial AI is no longer a future risk; it is a present-day reality.