Signs of a New Era: PowerShell AI Malware Marks a Cyber Evolution
TA547's use of LLM-generated PowerShell scripts marks an evolution beyond social engineering, posing a sophisticated new threat to financial organizations globally.
What Happened
TA547 has been identified using LLM-generated PowerShell scripts to load the Rhadamanthys malware in their phishing email campaigns.
Context
The Attack
Discovered by Proofpoint [1], the emails impersonate a German retail company and deliver a malicious zipped LNK file. The LNK file triggers a PowerShell script that decodes and loads the Rhadamanthys malware into memory. The PowerShell file contains the wordy comments characteristic of LLM-generated code, indicating that TA547 and other threat actors are using LLMs to write, or at least edit, their software, and that LLMs are being used for coding in languages beyond the more standard Python and JavaScript. From our experiments in the appendix, we see that mainstream LLMs could have been used here.
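The delivery chain above (ZIP attachment containing an LNK lure) can be flagged on the defensive side before the PowerShell stage ever runs. The following is an illustrative Python sketch, not the attacker's code; the extension list and function name are our own assumptions for demonstration:

```python
import zipfile

# Extensions commonly abused as lures in zipped phishing attachments,
# including the .lnk shortcuts used in this TA547 campaign.
# (Illustrative list -- tune for your own environment.)
SUSPICIOUS_EXTENSIONS = (".lnk", ".js", ".vbs")

def flag_suspicious_zip(path: str) -> list[str]:
    """Return the names of suspicious members inside a ZIP attachment."""
    with zipfile.ZipFile(path) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]
```

A mail gateway or sandbox applying a check like this would quarantine the attachment regardless of how the embedded PowerShell was written, which matters when the script itself keeps changing.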
The Attacker
TA547 is a financially motivated group, or "Initial Access Broker", that has been active since 2017. They operate globally but have recently targeted the US, Germany, Spain, Austria, and Switzerland. They often use phishing emails for initial access, delivering malicious zipped JavaScript and LNK files. Their payloads have included remote access trojans and info stealers, e.g. Lumma Stealer, Zloader, and, more recently, Rhadamanthys.
AI LLM Crime
Previously we’ve seen:
State-sponsored APTs exploring LLM tools
Social engineering and deepfake scams
Dark LLM/GPT scams that help with fraud, phishing, etc.
This is notable since it’s the next stage in an attack: moving from social engineering to the scripts used for initial access.
Why this Matters to Financial Organizations
Traditional malware-detection rules that depend on ‘signatures’ can be more easily evaded with LLM-generated code. Bad actors can essentially create many more variations of malware strains, making them harder to detect. It is easier than ever to create script variants, so detection needs more semantic understanding of what a script actually does.
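The signature-evasion point can be made concrete with a benign sketch. Below, a Python stand-in for the PowerShell case: two scripts that differ only in comments. A hash "signature" treats them as different files, while their parsed behavior is identical, which is exactly why semantic analysis is needed. The snippets and names here are illustrative, not taken from the actual malware:

```python
import ast
import hashlib

# Two functionally identical scripts; only a comment was reworded
# (the kind of change an LLM produces effortlessly).
variant_a = "x = 'payload'\nprint(x)  # stage one\n"
variant_b = "x = 'payload'\nprint(x)  # Initialize and emit the first-stage value\n"

# Hash-based signatures diverge on a comment-only change.
hash_a = hashlib.sha256(variant_a.encode()).hexdigest()
hash_b = hashlib.sha256(variant_b.encode()).hexdigest()
print(hash_a == hash_b)  # False: looks like a "new" strain to a hash signature

# Comments are discarded at parse time, so the ASTs match exactly:
# semantic analysis still sees one and the same script.
print(ast.dump(ast.parse(variant_a)) == ast.dump(ast.parse(variant_b)))  # True
```

Every reworded comment or renamed variable yields a fresh hash at essentially zero cost to the attacker, while behavior-level (AST, sandbox, or EDR) analysis is unaffected.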
Beyond social engineering: more sophisticated bad actors are now using LLMs for more than social engineering emails; they are also using them to refine infostealers. This will likely increase the effectiveness of infostealers, resulting in more compromised credentials, which in turn fuels the financial fraud and account takeover (ATO) ecosystem.
Previously we’ve seen discussions about how tools like FraudGPT enable new bad actors to enter the space and carry out attacks like fraud or phishing. In this attack, we see that the more organized actors are using LLMs to carry out more sophisticated attacks.
Sources:
[1] Proofpoint -- Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer
Dark Reading -- TA547 Uses an LLM-Generated Dropper to Infect German Orgs
Bleeping Computer -- Malicious PowerShell script pushing malware looks AI-written
CSO -- AI tools likely wrote malicious script for threat group targeting German organizations
Tech Radar
Malpedia -- TA547
Appendix: Our experiments on the PowerShell script
We have not seen other analysts specify which LLM this could be, but from a quick check it looks like ChatGPT (the mainstream LLM) does not have limitations that would prevent it being used to adjust this code. In our prompt we simply asked it to change the comments, as an example of simple signature evasion.
OpenAI
Claude
GitHub Copilot