With just a few steps, a security researcher managed to use ChatGPT to create malware capable of stealing data from others without detection.
According to Fox News, security researcher Aaron Mulgrew from the global cybersecurity firm Forcepoint shared how he was able to create malware using prompts on ChatGPT in just a few hours.
How Does ChatGPT Create Malware?
The experiment suggests that anyone with even modest technical knowledge could create malware this way.
Although OpenAI has implemented protective measures to stop users from asking ChatGPT to generate malicious code, Mulgrew was still able to find a loophole.
Rather than asking for malware outright, he prompted ChatGPT to generate small pieces of functional code through a series of narrowly scoped requests. After assembling the individual functions, he realized he had compiled a file capable of stealing data undetected, as sophisticated as any malware.
The malware disguises itself as a screensaver application and launches automatically on Windows devices.
Once on a device, it scans for files of various types, including Word documents, images, and PDFs, harvesting any data it can find.
Once it has the data, it breaks it into smaller fragments and hides those fragments inside other images on the device, a technique known as steganography.
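The article does not publish the actual code, but the general technique it describes, hiding data inside image pixel values, is classic least-significant-bit (LSB) steganography. A minimal, self-contained sketch of the idea (all names are hypothetical, and a plain byte array stands in for an image's raw pixel data):

```python
# Generic LSB steganography illustration -- not the code from the experiment.
# "pixels" is a flat byte array standing in for an image's raw channel values.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide each payload bit in the lowest bit of one pixel byte."""
    out = bytearray(pixels)
    # Flatten the payload into individual bits, least-significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("payload too large for carrier")
    for idx, bit in enumerate(bits):
        # Overwrite only the last bit of each pixel byte; the value changes
        # by at most 1, which is visually imperceptible in a real image.
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes from the lowest bits of the pixel bytes."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

carrier = bytearray(range(256)) * 4   # stand-in for raw pixel data
secret = b"hello"
stego = embed(carrier, secret)
print(extract(stego, len(secret)))    # b'hello'
```

Because each carrier byte is altered by at most one bit, the doctored image looks unchanged to the eye, which is what makes the hidden payload hard to detect.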
These images are then uploaded to a Google Drive folder; because traffic to a trusted service like Google Drive rarely raises suspicion, the exfiltration avoids detection. The generated code is, in short, alarmingly capable.
Concerning Findings
According to Mulgrew, he was able to refine and harden his code against detection using nothing more than simple ChatGPT prompts.
All of this took place in a private experiment; the malware was never used against anyone.
Still, the discovery is alarming: Mulgrew created highly dangerous malware without a team of hackers, and without writing the code himself.
“I have no advanced coding experience, yet the protective measures of ChatGPT were still not strong enough to block my test,” Mulgrew warned.