Expel Quarterly Threat Report volume II: attackers and AI

· 2 MIN READ · AARON WALTON · APR 16, 2024 · TAGS: MDR

Welcome to the second installment of our new and improved Expel Quarterly Threat Report (QTR) blog series. In volume I: Q1 by the numbers, we gave an overview of our security operations center (SOC) team’s high-level findings from the first quarter of 2024, including a quick background on our QTRs, the Q1 numbers, and our top takeaways from the quarter.

Here, our series continues with a deeper look at one of our key findings about attackers using AI to enable more sophisticated social engineering attacks. Let’s dive in.

Threat actors mostly use AI to aid in social engineering attacks.

In our Annual Threat Report, we made predictions for 2024 that (coincidentally) revolved around two main themes: first, that AI would supercharge social engineering campaigns, and second, that threat actors would inevitably stick to tried-and-true methods. One quarter in, and both of these predictions appear to ring true. Here’s how.

The incidents involving AI that we see tend to be social engineering attacks. Within our customer base, that primarily means attackers leveraging AI to create malicious YouTube videos advertising cracked software. Threat actors use AI-animated avatars to produce high-quality, highly convincing videos that urge viewers to download malware disguised as legitimate software from malicious sites. In most cases, these videos are posted on stolen YouTube accounts, typically with over 100k subscribers.

Globally, threat actors are trending toward using AI in investment- and romance-themed social engineering scams. These scams depend on long-form, sustained communication, and scammers use AI to generate their messages to victims.

We also correctly predicted that many actors still use the same old tactics; we aren’t seeing them abandon their old ways for a hard shift to AI. Why? Likely because they don’t need to. The tactics they’ve used in the past still work, so there’s no real motivation to change. The phishing tactics we observed in previous quarters likewise remain consistent.

Notably, and worthy of a category of its own, the social engineering power of AI is already clearly demonstrated in deepfakes: real media manipulated using AI. Daniel Clayton (Expel’s VP of Operations) suggested in his predictions for 2024 that “this year’s election cycle and emotive geopolitical situations provide a situation ripe for disinformation.” Indeed, we’ve already seen deepfakes used to influence the 2024 US presidential election, with fake voice calls from President Biden and fake videos of Taylor Swift endorsing particular political views. Deepfakes are also being used in financial scams: malicious ads have featured deepfake videos of influential figures to push scams, and attackers have stolen millions of dollars by impersonating key members of an organization.

How to protect your org:

Deepfakes targeting employees lend a social engineering attack a strong illusion of legitimacy, raising its success rate while making it harder for security teams to identify pre-compromise. These attacks are sophisticated, and victims may not realize they’re being exploited until after the malicious actor achieves their goals. To protect against them, security teams should monitor for deviations from normal communication channels and established processes so they can identify unusual behavior quickly.
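Deepfake-enabled fraud typically works by pulling a sensitive request outside the workflow it belongs in (say, a “CFO” requesting a wire transfer over a video call instead of the documented approval process). Here’s a minimal Python sketch of one way to flag that kind of out-of-band request. This isn’t Expel tooling: the event fields, channel names, and the APPROVED_CHANNELS policy are all hypothetical placeholders your team would adapt to its own processes.

```python
from dataclasses import dataclass

# Hypothetical policy: which channels are sanctioned for each sensitive process.
APPROVED_CHANNELS = {
    "wire_transfer": {"ticketing_system"},
    "credential_reset": {"helpdesk_portal"},
    "vendor_bank_change": {"ticketing_system", "signed_email"},
}

@dataclass
class Request:
    process: str    # e.g., "wire_transfer"
    channel: str    # e.g., "video_call", "sms", "ticketing_system"
    requester: str

def is_out_of_band(req: Request) -> bool:
    """Flag requests for sensitive processes arriving over unsanctioned channels."""
    allowed = APPROVED_CHANNELS.get(req.process)
    if allowed is None:
        return False  # process not covered by this policy; handle separately
    return req.channel not in allowed

# Example: a wire-transfer request made over a video call gets flagged for
# verification through an approved channel before anyone acts on it.
suspicious = Request(process="wire_transfer", channel="video_call", requester="cfo@example.com")
if is_out_of_band(suspicious):
    print(f"Out-of-band {suspicious.process} request from {suspicious.requester}: verify via an approved channel")
```

The design choice here is deliberate: rather than trying to detect whether the media itself is fake (which is hard, especially pre-compromise), the check anchors on process, which the attacker can’t easily see or imitate.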

Next up in this series: the rise of high-risk malware. Questions? Just want to chat? Drop us a line.

Q1 QTR series quick links

Check out the other blogs in the series for more of our Q1 findings: