Look Out for These Five ChatGPT Security Risks in 2023

Since OpenAI launched ChatGPT at the end of 2022, industries across the board have been shaken by the AI writing tool's current capabilities and its remarkable potential. The AI-powered chatbot can produce a vast array of content in different tones and styles in a matter of seconds, but it also raises several security issues that cybersecurity professionals will need to confront in 2023.

How Cybercriminals Can Use ChatGPT

The ability of ChatGPT to produce a vast array of content, from poems to essays to emails, is nothing short of extraordinary. Just as many professional writers, marketers, SEO strategists, and others will treat it as one tool among many in their arsenal, cybercriminals are likely to adopt it for their own work.

Here are five ways cybercriminals can adapt ChatGPT for their nefarious purposes. 

●     Creating malicious code: According to researchers at Check Point Research, hackers have already started using ChatGPT to write malicious code quickly. The speed at which it can generate working code lowers the barrier to entry and is likely to galvanize criminal activity.

●     Phishing emails: ChatGPT is programmed not to produce malicious content. However, by experimenting with the wording of prompts, TechCrunch staff succeeded in getting ChatGPT to produce fairly realistic-sounding phishing emails, demonstrating that cybercriminals can easily circumvent ChatGPT's guardrails, at least for now.

●     Crime-as-a-service: According to Cybersecurity Hub, one of the top cyber threats of 2023 is crime-as-a-service, in which bad actors carry out cyber attacks for hire on behalf of those who cannot do it themselves. ChatGPT can help these bad actors deliver their services more quickly and easily.

●     Fraud: One of the niftiest aspects of ChatGPT is that it can change its writing style to suit user requests; "Write a country song about cats in the style of Willie Nelson" is just one example among many. Because imitation is the very nature of the platform, it is no surprise that cybercriminals who break into an individual's private email account can use ChatGPT to mimic that person's writing style for any number of nefarious purposes.

●     Spreading misinformation: A recent article in Scientific American outlines many instances of ChatGPT simply producing incorrect output. In the hands of everyday users, that may be annoying but not dangerous. What happens when cybercriminals or rival governments use ChatGPT to spread disinformation intentionally? Social unrest and political upheaval are a stone's throw away, especially since the disinformation ChatGPT produces can sound so convincing.

Proactive Measures to Protect Against Malicious Use of ChatGPT

Some might assume that just as cybercriminals are using ChatGPT to write malicious code, cybersecurity professionals should use it to write defensive code of their own, but that only goes so far. Remember, ChatGPT's output is not completely reliable and often contains factual errors; relying on its results blindly is irresponsible.

For now, cybersecurity professionals must anticipate an increase in bad actors and take appropriate precautions. They can use ChatGPT to their advantage, but they should not rely on it alone.

Platforms that allow user input should seriously consider banning ChatGPT-generated content, as Stack Overflow did temporarily. Companies and individuals concerned about fraud, phishing emails, or misinformation can run questionable content through AI-writing detectors. Individuals can easily find these online, but companies should invest in more comprehensive platforms, for example by screening submissions automatically, as sketched below.
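To make that concrete, here is a minimal Python sketch of screening submitted text against a detection service before it goes live. The endpoint URL, API key, and ai_probability response field are hypothetical placeholders, not any real product's API; substitute whichever detector you adopt.

```python
# Minimal sketch: screen user-submitted text with an AI-content detector.
# The URL, key, and response schema below are hypothetical placeholders.
import requests

DETECTOR_URL = "https://detector.example.com/v1/classify"  # hypothetical endpoint
API_KEY = "your-api-key"  # hypothetical credential


def looks_ai_generated(text: str, threshold: float = 0.9) -> bool:
    """Return True if the detector scores the text above the threshold."""
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"ai_probability": <float between 0 and 1>}
    return response.json()["ai_probability"] >= threshold


if __name__ == "__main__":
    sample = "Dear customer, your account requires immediate verification..."
    if looks_ai_generated(sample):
        print("Flag for human review before publishing.")
```

A threshold well above 0.5 keeps false positives down; flagged content goes to a human reviewer rather than being rejected outright, since detectors themselves are imperfect.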

OpenAI is working on a cryptographic watermark to identify AI-generated content. Google, for the moment, is certainly not a fan, but that has more to do with protecting the reliability of its search results and fending off competition from OpenAI than with security.
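To illustrate the general idea, here is a toy Python sketch of how a keyed statistical watermark could be detected. This is not OpenAI's actual scheme, whose details have not been published; the secret key, the word-level tokenization, and the even/odd partition are all simplifications for illustration.

```python
# Toy illustration of keyed statistical watermark detection.
# Idea: a secret key partitions the vocabulary into "favored" and
# "unfavored" tokens; watermarked generation prefers favored tokens,
# and anyone holding the key can check whether favored tokens appear
# more often than the ~50% expected by chance.
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical watermarking key


def is_favored(token: str) -> bool:
    """Keyed hash assigns each token to the favored half of the vocabulary."""
    digest = hashlib.sha256(SECRET_KEY + token.encode()).digest()
    return digest[0] % 2 == 0


def favored_fraction(text: str) -> float:
    """Fraction of tokens in the favored partition; ~0.5 for unwatermarked text."""
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(is_favored(t) for t in tokens) / len(tokens)


# Unwatermarked text should score near 0.5; generation that consistently
# prefers favored tokens would score well above it.
print(favored_fraction("the quick brown fox jumps over the lazy dog"))
```

The practical consequence is that detection requires the key: platforms partnered with the model provider could verify provenance, while bad actors without the key cannot easily tell which tokens to swap to scrub the watermark.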

The bottom line is that ChatGPT is extremely new and incredibly exciting. Like any new technology, it carries great potential but also great risk. Since it was released only a few months ago, it will take time for cybercriminals to realize its full potential, and for cybersecurity professionals to work out the bulwarks and mitigation strategies needed to counter them.

For additional resources and guidance, the Interfor team is here to help.