Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

DarkSword: Known Threats. Known Protection. Complete Visibility.

In moments. No warning. No trace. Total takeover. In March 2026, a new breed of mobile threat emerged: DarkSword. This sophisticated iOS exploit chain doesn’t need a phishing link or a malicious app download. Just one visit to a compromised website is enough to expose your entire enterprise. In this video, we dissect the DarkSword attack path—from the initial Safari iframe encounter to the kernel-level takeover—and show you how the threat disappears before most security teams even know it’s there.

Mobile Threat Report Briefing

In this video, David Richardson, Product CTO at Lookout, provides a strategic overview of the evolving mobile threat landscape based on Q3 2025 global enterprise data. Key Insights: Dominant Attack Vectors: Mobile phishing and social engineering remain the most significant threat categories. Attackers are increasingly using AI-powered tools to craft authentic-looking messages and conduct deep research for highly targeted attacks.

Smishing AI

Cybercriminals are evolving—and so are their tactics. Smishing, or SMS phishing, has become one of the fastest-growing mobile threats. With AI, attackers can now create convincing, personalized messages in seconds—removing language barriers and making scams harder than ever to detect. That’s where Lookout Smishing AI comes in. Our advanced AI-powered detection goes beyond scanning for malicious links. It identifies the intent behind every message—stopping social engineering attacks before they reach you. Whether there’s a URL or not, Lookout keeps your mobile workforce protected.

The Sword Has Been Drawn: What DarkSword's Expansion in the Wild Means for Mobile Security and the Enterprise

The last few weeks have marked a chaotic turning point in the mobile threat landscape. We’ve seen mass exploitations across numerous iOS versions by multiple threat actors, driven by sophisticated exploit chains like Coruna and now DarkSword. What makes these threats different is not just their activity, but their trajectory. Until recently, these capabilities were expensive, highly secretive, and limited to a small number of advanced actors. Now, that dynamic has shifted rapidly.

Lookout Expands Protection Following Google's Disruption of the IPIDEA Proxy Network

Last week, Google’s Threat Intelligence Group announced the disruption of IPIDEA, one of the largest and most abused residential proxy networks observed in the wild. IPIDEA quietly turned millions of consumer devices into proxy exit nodes, enabling cybercrime, espionage, and botnet activity—while putting users and enterprises at risk. At Lookout, we acted immediately.

Understanding the LLM Mobile Landscape in Enterprise Technology

Mobile security has always been complex, but LLM technology has added a whole new dimension to the field. Behind every popular generative AI (genAI) tool is a comprehensive large language model (LLM) that provides data and parses queries in natural language. When used responsibly, LLMs can be useful tools for ideation and content generation. In the wrong hands, though, LLMs can help threat actors supercharge their social engineering scams.

Anatomy of a Vishing Attack: Technical Indicators IT Managers Need to Track

If your organization hasn’t encountered a vishing attack yet, it’s probably only a matter of time. Vishing, or voice phishing, is a sophisticated type of social engineering that adds a whole new dimension to common scams. Rather than emails or text messages, threat actors employ phone calls or online voice calls to carry out vishing schemes. Particularly savvy attackers can even copy a real person’s voice to deceive, coerce, or manipulate potential victims.

The Automated Con: Mitigation Tactics for Identifying Deepfake and LLM-Assisted Impersonation

Over the past few years, artificial intelligence (AI) has supercharged deepfake technology. Creating a fake picture, video, or audio recording of a person used to require a considerable investment of both time and technical skills. Now, generative AI (genAI) platforms can whip up convincing deepfakes in minutes, using only a single photo or short voice clip as a starting point.

LLM Security Checklist: Essential Steps for Identifying and Blocking Jailbreak Attempts

If your organization uses a private large language model (LLM), then it’s time to start thinking about countermeasures for jailbreaking. A jailbroken LLM can lead to leaked information, compromised devices, or even a large-scale data breach. Even more troubling: Jailbreaking LLMs is often as simple as feeding them a series of clever prompts. If your customers can access your LLM, your potential risk is even higher.

Prompt Injection: The Hidden Threat Hijacking Your LLMs (and How to Stop It)

Generative AI is rapidly transforming the way we work. The large language models (LLMs) that power tools like ChatGPT and Claude are immensely powerful, capable of providing us with research data, detailed insights, and even deep analysis of documents and data sets, all performed through simple, text-based prompts. However, these prompts have unfortunate side effects for the IT professionals assigned to protect sensitive and proprietary data from cyber attacks.