AI Being Used to Generate Hard-to-Detect Phishing Emails
Abnormal Security, a company that offers email security services, recently analyzed three phishing emails blocked by its platform that it suspects were generated by AI. The company also warned that malicious actors are increasingly using generative AI tools to create phishing emails that appear more authentic and convincing.
One attack covered by Abnormal Security illustrates this clearly. A phishing email impersonating an employee was sent to a company’s payroll department, claiming that the account currently on file had been deactivated and asking to set up a new one, in an effort to have the company pay wages into the scammer’s account. Aside from the fake employee name, nothing in the email indicated an attack: it was well written and professional.
Abnormal Security states with a high degree of confidence that this email, along with the other examples in its report, was AI-generated. The emails were run through several open-source language models to measure how likely each word was to be predicted given the preceding context. Because nearly all of the words were rated as highly probable, far more so than is typical of human-written text, the emails were judged to be likely AI-written. These findings were corroborated by tools such as OpenAI Detector and GPTZero.
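The report does not specify which language models were used, but the general technique is to score each token's likelihood under an open-source model and treat unusually predictable text as a weak signal of machine generation. The sketch below illustrates that idea using GPT-2 and the Hugging Face transformers library as stand-ins; the email text and thresholding are hypothetical.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_log_probs(text: str) -> torch.Tensor:
    """Log-likelihood of each token given the preceding context."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict the token at position i + 1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    target_ids = input_ids[:, 1:]
    return log_probs.gather(2, target_ids.unsqueeze(-1)).squeeze(-1)[0]

def perplexity(text: str) -> float:
    """Lower perplexity means the model found the text more predictable."""
    return torch.exp(-token_log_probs(text).mean()).item()

# Hypothetical email body for illustration only.
email_body = ("Hello, my bank account on file was recently deactivated. "
              "Could you please set up a new account for my next paycheck?")
print(f"Perplexity: {perplexity(email_body):.1f}")
# Highly predictable text is one weak indicator of AI generation;
# on its own it is not proof, which is why detectors combine signals.
```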
“Users have long been taught to look for typos and grammatical errors in emails to understand whether it is an attack, but generative AI can create perfectly-crafted phishing emails that look completely legitimate—making it nearly impossible for employees to decipher an attack from a real email.”
According to Dan Shiebler, the head of ML at Abnormal Security, business email compromise (BEC) actors frequently utilize templates when crafting and executing their email attacks. In an interview with VentureBeat, Shiebler explained:
"Many traditional BEC attacks often contain repetitive or commonly used content, which can be identified by email security technology that operates on predefined policies. But with generative AI tools like ChatGPT, cybercriminals are writing a greater variety of unique content, based on slight differences in their generative AI prompts. This makes detection based on known attack indicator matches much more difficult while also allowing them to scale the volume of their attacks."
John Bambenek, the principal threat hunter at Netenrich, a digital security operations company in San Jose, California, argues that the generative AI problem cannot be permanently solved by relying solely on AI. In an interview with TechNewsWorld, he emphasized the importance of adopting a behavior analytics approach to distinguish between normal and abnormal email activities.
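The behavior-analytics approach Bambenek describes compares each message against what is normal for that sender rather than against the message text alone. The following sketch is a minimal illustration of that intuition, not Netenrich's or Abnormal Security's actual system; the profile fields and scoring are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SenderProfile:
    # Hypothetical per-employee baseline built from historical mail logs.
    usual_recipients: set = field(default_factory=set)
    usual_send_hours: set = field(default_factory=set)  # hours 0-23
    has_requested_payroll_change: bool = False

def anomaly_score(profile: SenderProfile, recipient: str, send_hour: int,
                  mentions_bank_change: bool) -> int:
    """Count the ways this message departs from the sender's baseline."""
    score = 0
    if recipient not in profile.usual_recipients:
        score += 1
    if send_hour not in profile.usual_send_hours:
        score += 1
    if mentions_bank_change and not profile.has_requested_payroll_change:
        score += 1
    return score

# Example: an "employee" who has never emailed payroll suddenly asks,
# at 3 a.m., to reroute wages to a new account.
profile = SenderProfile(usual_recipients={"team@example.com"},
                        usual_send_hours={9, 10, 11, 14, 15, 16})
print(anomaly_score(profile, "payroll@example.com", 3, True))  # 3 -> escalate
```

Because the score depends on who is sending what to whom and when, it stays useful even when the email body itself is flawlessly written.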