Attackers are integrating LLMs directly into malware #cybersecurity #ai #malware #infosec #podcast
Threat actors have moved beyond using AI to speed up operations. They're now embedding large language models directly into malware.
In this Intel Chat, Matt Bromiley and Chris Luft discuss the Google Threat Intelligence Group's findings on malware families like PromptFlux and PromptSteal.
These threats query LLMs mid-execution to dynamically alter behavior, obfuscate code, and generate system commands on demand.
PromptFlux uses the Gemini API to regenerate and re-obfuscate its own source code hourly. PromptSteal, attributed to Russian-backed APT28, uses the Hugging Face API to obtain system reconnaissance and data exfiltration commands in real time.
This is true dynamic malware. A piece of malware gets dropped on a system, assesses the environment, beams that information up in a prompt, gets back a custom script tailored to what it's seeing, and executes its mission.
The answer isn't to avoid the technology just because attackers are using it. It's to wrap controls and detections around these tools, understand what legitimate LLM usage looks like in your environment, and profile accordingly.
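As a starting point for that kind of profiling, here's a minimal sketch of a watchlist check that flags processes making outbound connections to well-known public LLM API endpoints. The domain list and the event format are illustrative assumptions for this example, not a vetted detection rule; in practice you'd feed this from your EDR or DNS telemetry and baseline which processes legitimately talk to these services.

```python
# Hypothetical watchlist of public LLM API domains (illustrative, not exhaustive).
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face Inference API
    "api.openai.com",                     # OpenAI API
}

def is_llm_endpoint(hostname: str) -> bool:
    """Return True if hostname is a watched domain or a subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in LLM_API_DOMAINS
    )

def flag_connections(events):
    """Given (process_name, destination_hostname) pairs from your telemetry,
    return the ones that reach an LLM API endpoint for analyst review."""
    return [(proc, host) for proc, host in events if is_llm_endpoint(host)]
```

An unexpected process (say, an unsigned binary in a temp directory) reaching `generativelanguage.googleapis.com` is exactly the kind of signal worth triaging given the behavior described above.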
#cybersecurity #ai #malware #infosec #llm #podcast