With just a few prompts, a security researcher "asked" ChatGPT to create malware capable of stealing data without detection.
According to Fox News, security researcher Aaron Mulgrew of the global cybersecurity firm Forcepoint shared how he was able to create malware using ChatGPT prompts in just a few hours.
How Does ChatGPT Create Malware?
The experiment suggests that anyone with basic technical knowledge could do the same.
While OpenAI has implemented safeguards to prevent users from asking ChatGPT to generate malicious code, Mulgrew still found a loophole.
Rather than requesting a complete program, he prompted ChatGPT to generate small pieces of functional code, one specific request at a time. After assembling the individual functions he had documented, he realized he had a file capable of stealing data undetected, sophisticated enough to rival professionally developed malware.
The malware disguises itself as a screensaver application, which launches automatically on Windows devices.
Once on a device, it scans files of all types, including Word documents, images, and PDFs, searching for any data worth stealing.
Once it has the data, it breaks it into smaller pieces and hides those pieces inside other images on the device, a technique known as steganography.
Because traffic to Google Drive rarely raises alarms, the images are then uploaded to a Google Drive folder, making the theft difficult to detect. The resulting code is remarkably capable.
Concerning Findings
According to Mulgrew, he was able to refine his code and harden it against detection using simple ChatGPT prompts.
All of this activity was conducted in a private experiment, and the malware did not target anyone.
The finding is nonetheless alarming: Mulgrew was able to create highly dangerous malware without a team of hackers, and without writing the code himself.
"I have no advanced coding experience, yet ChatGPT's protective measures were still not strong enough to block my test," Mulgrew warned.