Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Anatomy of a Vishing Attack: Technical Indicators IT Managers Need to Track

If your organization hasn’t encountered a vishing attack yet, it’s probably only a matter of time. Vishing, or voice phishing, is a sophisticated type of social engineering that adds a whole new dimension to common scams. Rather than emails or text messages, threat actors employ phone calls or online voice calls to carry out vishing schemes. Particularly savvy attackers can even copy a real person’s voice to deceive, coerce, or manipulate potential victims.

Understanding the LLM Mobile Landscape in Enterprise Technology

Mobile security has always been complex, but LLM technology has added a whole new dimension to the field. Behind every popular generative AI (genAI) tool is a comprehensive large language model (LLM) that provides data and parses queries in natural language. When used responsibly, LLMs can be useful tools for ideation and content generation. In the wrong hands, though, LLMs can help threat actors supercharge their social engineering scams.

The Automated Con: Mitigation Tactics for Identifying Deepfake and LLM-Assisted Impersonation

Over the past few years, artificial intelligence (AI) has supercharged deepfake technology. Creating a fake picture, video, or audio recording of a person used to require a considerable investment of both time and technical skills. Now, generative AI (genAI) platforms can whip up convincing deepfakes in minutes, using only a single photo or short voice clip as a starting point.

LLM Security Checklist: Essential Steps for Identifying and Blocking Jailbreak Attempts

If your organization uses a private large language model (LLM), then it’s time to start thinking about countermeasures for jailbreaking. A jailbroken LLM can lead to leaked information, compromised devices, or even a large-scale data breach. Even more troubling: Jailbreaking LLMs is often as simple as feeding them a series of clever prompts. If your customers can access your LLM, your potential risk is even higher.