ChatGPT is an AI-based tool created by OpenAI. The model is designed to perform a range of tasks and is widely praised for its human-like conversational interactions. ChatGPT was built on the GPT-3 language model. OpenAI embedded filters into the model that limit its use; these limitations cover the data source, chat memory and character support. ChatGPT continues to be used positively, for example to debug code and to present solutions to questions.
However, like many new technological advances, cybercriminals have adapted the technology for malicious use. Released on the dark web in early July, WormGPT is based not on GPT-3 but on the open-source GPT-J language model created by EleutherAI in 2021.
This model was discovered online by researchers at the email security company SlashNext on July 13th. Unlike ChatGPT, the model was not created by OpenAI and is designed specifically for malevolent tasks. To navigate around the limitations placed on ChatGPT, WormGPT utilises jailbreaks. A jailbreak is a specially crafted prompt or instruction that forces a model to complete commands against its original design. WormGPT was reportedly trained on data gathered from hacking guides, scam emails and security forums, using information from these different sources to make itself more effective at malware creation.
These adaptations have given WormGPT a variety of malicious uses. The first and most common is designing phishing emails. For example, WormGPT can be asked to produce an email to be sent out to companies or individuals with the intention of scamming them. Approximately 3.4 billion phishing emails are sent daily. Many of these emails are rejected by email servers' spam filters, or deleted and reported by email users due to increased awareness of this type of scam. However, WormGPT has produced "remarkably persuasive and strategically cunning emails" that can bypass email filters and enter a user's inbox, making the scam more likely to succeed.
Other worrying uses of WormGPT include assisting criminals in identifying code vulnerabilities on websites that could offer easy access into the backend of the site, enabling them to operate undetected within the target website.
After the creation of WormGPT, other malicious GPT models have appeared on the dark web, such as FraudGPT and PoisonGPT, which were also released in July. These models are AI-driven tools that have likewise been adapted specifically for committing crime.
However, whether or not these new AI models pose a real threat has been debated online. Many have pointed to users' increasing ability to spot phishing emails, and to email filters becoming ever more adept at detecting and filtering these scams.
As well as this, WormGPT is not free to use: it costs $121 a month, which limits its user base. There have been negative reviews of the model, with some commenting that the emails produced were riddled with spelling or grammatical errors. Others stated that WormGPT is "not worth a dime".
Overall, cybercriminals will continue to exploit advances online, including AI models. The effectiveness of WormGPT is still undetermined given its recent creation and limited access.
However, the criminals' success is ultimately in the hands of the reader. The public must be kept abreast of developments, new scams and phishing attempts. They should be taught how to spot phishing emails and texts, and how to verify genuine communication attempts. Companies should ensure their websites are secure and regularly penetration test them for vulnerabilities.
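One part of verifying a communication attempt is checking whether the sending domain actually passed the standard email authentication checks (SPF and DKIM), which receiving mail servers record in the Authentication-Results header. The sketch below is purely illustrative, using Python's standard `email` module; the sample message and the `looks_suspicious` helper are hypothetical, and real mail clients and gateways perform these checks far more robustly.

```python
from email import message_from_string

# Hypothetical sample message: the Authentication-Results header
# (added by the receiving mail server) reports SPF and DKIM failures.
RAW_EMAIL = """\
From: support@example.com
Subject: Urgent: verify your account
Authentication-Results: mx.example.net; spf=fail; dkim=fail

Please click the link below to confirm your details.
"""

def looks_suspicious(raw: str) -> bool:
    """Return True if SPF or DKIM checks are reported as failing."""
    msg = message_from_string(raw)
    auth = (msg.get("Authentication-Results") or "").lower()
    return "spf=fail" in auth or "dkim=fail" in auth

print(looks_suspicious(RAW_EMAIL))  # prints: True
```

A failed check is only one signal; a well-crafted phishing email may pass both, so header checks complement, rather than replace, user awareness.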
Criminals may manage to wriggle around these issues, but victims shouldn’t be caught hook, line and sinker.
At Quintel, we offer services that can help mitigate the potential risks this malicious AI may present, notably for those who may have fallen victim to phishing emails, or to assist in the verification of suspicious online communication. We also perform due diligence checks on anonymous senders to help uncover and corroborate their identity. For further information please contact email@example.com.
Global Risk Investigations
92 Albert Embankment
London, SE1 7TY
+44 (0)203 948 1988