
Latest Posts

Another Perspective on ChatGPT's Social Engineering Potential

We’ve had occasion to write about ChatGPT’s potential for malign use in social engineering, both in the generation of phishbait at scale and as a topical theme that can appear in lures. We continue to track concerns about the new technology as they surface in the literature.

[Heads Up] The New FedNow Service Opens Massive New Attack Surface

You may not have heard of this service planned for July 2023, but it promises a massive new social engineering attack surface. This is from their website: "About the FedNow℠ Service. The FedNow Service is a new instant payment infrastructure developed by the Federal Reserve that allows financial institutions of every size across the U.S. to provide safe and efficient instant payment services."

Phishing for Credentials in Social Media-Based Platform Linktree

Social media is designed, of course, to connect people, but legitimate modes of connection can be abused. One such abuse currently in circulation involves Linktree, a kind of meta-medium for social media users with many accounts. If you’re unfamiliar with Linktree (which, we stress, is a legitimate service), here’s how the company describes what it lets you do.

Nearly One-Half of IT Pros are Told to Keep Quiet About Security Breaches

At a time when cyberattacks are succeeding to varying degrees and IT pros are keeping quiet about the resulting breaches, one specific type of attack worries them most. Even though threat-data sharing across the industry is at an all-time high, many organizations still don’t want the public to find out about data breaches, fearing the repercussions for the company’s revenue and reputation.

OpenAI Transparency Report Highlights How GPT-4 Can be Used to Aid Both Sides of the Cybersecurity Battle

The nature of an advanced artificial intelligence (AI) engine such as ChatGPT means it can be used and misused alike, potentially empowering both security teams and threat actors. I’ve previously covered examples of how ChatGPT and other AI engines like it can be used to craft believable business-themed phishing emails, malicious code, and more for the threat actor.

More Companies with Cyber Insurance Are Hit by Ransomware Than Those Without

In an interesting twist, new data hints that organizations with cyber insurance may be relying on it too heavily instead of shoring up security to keep attacks from succeeding in the first place. Cyber insurance should be treated as an absolute last resort, and a claim payout should never be assumed to be a sure thing.

Phishing Email Volume Doubles in Q1 as the Use of Malware in Attacks Slightly Declines

New data shows that cybercriminals started this year off with a massive effort using new techniques and increased levels of attack sophistication. According to cybersecurity vendor Vade’s Q1 2023 Phishing and Malware Report, the number of phishing attacks in Q1 this year reached its highest total since 2018: Vade detected over 562 million phishing emails, with January alone accounting for the lion’s share of Q1 volume (approximately 87%).

That Email Isn't from the New Jersey Attorney General

Earlier this month, state employees in New Jersey began receiving emails that falsely represented themselves as originating with the state’s attorney general. “At first blush, the communiques appeared to come from the state Attorney General's Office and sported a convincing njoag.gov domain.”

Guarding Against AI-Enabled Social Engineering: Lessons from a Data Scientist's Experiment

The Verge came out with an article that got my attention. As artificial intelligence continues to advance at an unprecedented pace, the potential for its misuse in the realm of information security grows in parallel. A recent experiment by data scientist Izzy Miller shows another angle: Miller managed to clone his best friends' group chat using AI, downloading 500,000 messages from a seven-year-long conversation and training an AI language model to replicate his friends' exchanges.
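To make concrete why an experiment like this is so easy to reproduce (and therefore so easy to abuse for impersonation), here is a minimal, hypothetical sketch of the data-preparation step: turning an exported chat log into context/reply training pairs, the rough shape used to fine-tune a conversational language model. The function name, data shape, and sample messages are all illustrative assumptions, not Miller's actual pipeline.

```python
import json

def chat_to_training_examples(messages, context_window=3):
    """Turn a chronological chat export into (context -> reply) pairs.

    `messages` is a list of {"sender": str, "text": str} dicts, oldest
    first. Each example pairs the previous `context_window` messages
    with the next message as the target completion.
    """
    examples = []
    for i in range(context_window, len(messages)):
        context = "\n".join(
            f'{m["sender"]}: {m["text"]}' for m in messages[i - context_window:i]
        )
        target = messages[i]
        examples.append({
            # Prompt ends with the speaker tag so the model learns
            # to produce that person's next line.
            "prompt": context + f'\n{target["sender"]}:',
            "completion": " " + target["text"],
        })
    return examples

# A tiny stand-in for a real 500,000-message export
chat = [
    {"sender": "izzy", "text": "anyone up for tacos?"},
    {"sender": "sam", "text": "always"},
    {"sender": "dana", "text": "same place as last time?"},
    {"sender": "izzy", "text": "yep, 7pm"},
]

for ex in chat_to_training_examples(chat):
    print(json.dumps(ex))
```

The point is not the few lines of code but the asymmetry: anyone with access to a chat export can build a dataset like this, which is exactly why AI-enabled impersonation is a social engineering concern.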