Does ChatGPT Pose Cybersecurity Risks?
Check Point Research has found that ChatGPT, the AI chatbot created by OpenAI, poses several cybersecurity risks. Since the chatbot's release in November 2022, Check Point has been experimenting with it and researching its possible cybersecurity implications. In their tests, researchers successfully used ChatGPT to write a phishing email and to code a malicious payload.
Check Point Research first tested ChatGPT's ability to create a "plausible" phishing email in December 2022. The bot responded with a basic email asking the reader to click a link to a fake login page. While ChatGPT displayed a warning that using it for this purpose might violate OpenAI's content policy, Check Point was still able to request edits to the text to push the bot further. After several iterations, the email instead directed the reader to download a Microsoft Excel file.
ChatGPT was then given the task of producing malicious code to be hidden in this Excel file. After a few more iterations, ChatGPT produced code that would automatically download malware from a specified URL upon the Excel file being opened.
In a follow-up study conducted in January 2023, Check Point saw its cybersecurity fears confirmed: it found several examples of cybercriminals using ChatGPT to create malicious code. These were discovered on an underground hacking forum, where both experienced and unskilled hackers were sharing code they had produced with ChatGPT.
In its search, Check Point found one cybercriminal experimenting with ChatGPT to recreate common malware strains. They created an “infostealer” that identified and stole potentially valuable files across a system, compressed them, and then uploaded them to a specified server.
Check Point also found another hacker creating an encryption tool, who confirmed they had been assisted by ChatGPT. At first glance the script seemed harmless, but Check Point noted that it could be "modified to completely encrypt someone's machine without any user interaction." In theory, such a tool could form the basis of a ransomware attack.
However, some experts remain unconvinced by Check Point’s take on the issue. For example, former hacker Marcus Hutchins had less impressive results when he tested ChatGPT’s ability to create malware. He found that the bot could create a file encryption routine, one of the components of ransomware software, but was unsuccessful in combining it with the other elements needed to create a functional piece of malware. He believes it’s unlikely that an inexperienced hacker could create malware solely with ChatGPT.