Hackers Using ChatGPT AI Bot to Write Malicious Code to Steal Your Data
A new report says that cybercriminals are using ChatGPT, the artificial intelligence (AI) chatbot that produces human-like answers, to build tools that can steal your data.
Check Point Research (CPR) researchers have found the first instances of cybercriminals using ChatGPT to write malicious code. In underground hacking forums, threat actors are creating “info stealers,” encryption tools, and tools to facilitate fraud. The researchers warned that cybercriminals are becoming increasingly interested in using ChatGPT to scale their operations and to teach others malicious techniques.
“Cybercriminals are interested in ChatGPT. In the past few weeks, there have been signs that hackers are starting to use it to write harmful code. ChatGPT could help hackers get things done faster by giving them an excellent place to start,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.
ChatGPT can be used for good, such as helping developers write code, but it can also be abused. On December 29, a thread titled “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The thread’s author said he was using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
According to the research, “While this individual could be a technically oriented threat actor, these articles looked to be showcasing how less technically skilled cybercriminals can utilize ChatGPT for nefarious purposes, with concrete examples they can immediately employ.” On December 21, another threat actor published a Python script, noting that it was the “first script he ever developed.”
In response to another cybercriminal’s comment that the code’s style resembled OpenAI code, the hacker confirmed that OpenAI gave him a “good (helping) hand to finish the script with a nice scope.” The report noted that this could mean would-be cybercriminals with little to no development skill may be able to exploit ChatGPT to build dangerous tools and become fully fledged, technically capable cybercriminals.
According to Shykevich, “even if the tools that we analyze are quite simple, it’s only a matter of time until more sophisticated threat actors improve the manner that they use AI-based tools.” Recent reports say that OpenAI, the company behind ChatGPT, is seeking funding at a valuation of around $30 billion. Microsoft has invested $1 billion in OpenAI and is promoting ChatGPT applications as a means of solving real-world problems.