New Malicious AI Tool FraudGPT For Sale
The cyber threat landscape has taken a sinister turn in recent weeks with the emergence of AI tools designed explicitly for malicious activities. According to recent findings by the Netenrich threat research team, a new generative AI-driven model known as “FraudGPT” has been circulating on the Dark Web and in Telegram channels since July 22nd, 2023.
Netenrich security researcher Rakesh Krishnan said that FraudGPT “is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc. The tool is currently being sold on various Dark Web marketplaces and the Telegram platform.”
Subscriptions start at $200 per month, with discounted longer terms of $1,000 for six months and $1,700 for a full year. Operating under the online alias "CanadianKingpin," the tool's developer claims more than 3,000 confirmed sales and reviews, a figure that, if accurate, points to rapid adoption in the criminal underworld.
Krishnan explains that FraudGPT can generate highly deceptive content that persuades recipients to click on malicious links. That capability makes it a critical asset for threat actors orchestrating business email compromise (BEC) phishing campaigns, and a significant risk to organizations worldwide.
The features advertised for FraudGPT are deeply troubling for the cybersecurity community. They include writing malicious code, creating purportedly undetectable malware, discovering non-VBV (Verified by Visa) BINs (bank identification numbers), generating phishing pages, building hacking tools, identifying targetable sites, groups, and markets, and creating scam pages and letters.
As the creators of these malicious tools leverage the power of generative AI technology for their own gain, the challenge of countering their activities continues to escalate. While ethical safeguards are implemented in legitimate AI applications, cybercriminals find ways to re-implement the technology without those restrictions, making it imperative for organizations to bolster their defenses.
Krishnan continued, "As time goes on, criminals will find further ways to enhance their criminal capabilities using the tools we invent. While organizations can create ChatGPT (and other tools) with ethical safeguards, it isn’t a difficult feat to reimplement the same technology without those safeguards.”
FraudGPT isn't the first malicious AI tool of its kind. WormGPT, another AI-driven chatbot, surfaced on the Dark Web on July 13th, 2023. Like FraudGPT, it lets cybercriminals automate the creation of highly convincing fake emails personalized to the recipient, among other illegal activities.
Despite these alarming developments, cybersecurity professionals remain committed to combating adversarial AI threats, and they stress the need for organizations to adopt a defense-in-depth strategy. Rapid detection of fast-moving threats such as phishing is crucial to keep them from escalating into more destructive incidents, such as ransomware attacks or data exfiltration.
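To make the detection point concrete, even simple heuristics can illustrate what rapid phishing triage looks like in practice. The sketch below is a minimal, hypothetical Python example, not anything described in the Netenrich report: the allow-listed domains, urgency phrases, and scoring weights are assumptions chosen purely for demonstration. It flags links to unrecognized domains, raw-IP links, and common urgency wording:

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organization actually sends mail from;
# a real deployment would load this from configuration or threat intelligence.
TRUSTED_DOMAINS = {"example.com", "mail.example.com"}

# Urgency phrasing commonly seen in phishing lures (illustrative, not exhaustive).
URGENCY_PHRASES = (
    "verify your account immediately",
    "your account will be suspended",
    "urgent wire transfer",
)

LINK_PATTERN = re.compile(r"https?://[^\s\"'<>]+")
IP_PATTERN = re.compile(r"\d{1,3}(?:\.\d{1,3}){3}")


def score_email(body: str) -> int:
    """Return a crude risk score for an email body (higher = more suspicious)."""
    score = 0
    for url in LINK_PATTERN.findall(body):
        host = urlparse(url).hostname or ""
        # Links pointing outside the allow-list deserve a closer look.
        if host and not any(
            host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS
        ):
            score += 2
        # A raw IP address in a link is a classic phishing tell.
        if IP_PATTERN.fullmatch(host):
            score += 3
    lowered = body.lower()
    score += sum(phrase in lowered for phrase in URGENCY_PHRASES)
    return score


if __name__ == "__main__":
    sample = (
        "Please verify your account immediately: "
        "http://192.0.2.10/login or https://example.com.attacker.test/reset"
    )
    print(score_email(sample))  # Nonzero score: flag the message for human review
```

In production, heuristics like these would complement, not replace, email authentication checks (SPF, DKIM, DMARC), URL reputation feeds, and user awareness training.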