Monday, February 6, 2023

Hackers Using ChatGPT AI Bot to Write Malicious Code to Steal your Data

According to a new report, cybercriminals are using ChatGPT, the artificial intelligence (AI) chatbot that generates human-like answers, to build tools capable of stealing your data.

Check Point Research (CPR) researchers have documented the first instances of cybercriminals using ChatGPT to write malicious code. On underground hacking forums, threat actors are creating “info stealers,” encryption tools, and aids for fraud. The researchers warned that cybercriminals are growing increasingly interested in using ChatGPT to scale their operations and teach others how to carry out malicious activity.


“Cybercriminals are interested in ChatGPT. In the past few weeks, there have been signs that hackers are starting to use it to write harmful code. ChatGPT could help hackers get things done faster by giving them an excellent place to start,” Check Point’s Threat Intelligence Group Manager Sergey Shykevich said.

ChatGPT can be used for good, such as helping developers write code, but it can also be abused. On December 29, a thread titled “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. Its author said he was using ChatGPT to recreate malware strains and techniques described in research papers and write-ups on common malware.

According to the research, “While this individual could be a technically oriented threat actor, these articles looked to be showcasing how less technically skilled cybercriminals can utilize ChatGPT for nefarious purposes, with concrete examples they can immediately employ.” On December 21, a threat actor published a Python script, noting that it was the “first script he ever developed.”

Responding to another cybercriminal’s comment that the code’s style resembled OpenAI’s, the hacker said that OpenAI had given him a “good (helping) hand to finish the script with a nice scope.” The paper noted that this could mean would-be cybercriminals with little or no development skill may be able to exploit ChatGPT to build dangerous tools and become fully fledged cybercriminals with technical capabilities.


According to Shykevich, “even if the tools that we analyze are quite simple, it’s only a matter of time until more sophisticated threat actors improve the manner that they use AI-based tools.” Recent reports say OpenAI, the company that developed ChatGPT, is currently seeking funding at a valuation of around $30 billion. Microsoft has invested $1 billion in OpenAI and is promoting ChatGPT applications as a way to solve real-world problems.


