Chat and conversational bots powered by artificial intelligence, such as OpenAI’s ChatGPT and Google DeepMind’s Sparrow, have gained much prominence over the last year and offer incredible transformative potential. However, they also present unprecedented security threats that cannot be ignored.
Read on as we get down to brass tacks on chatbot security, how hackers are using them to conduct malicious activity, and best practices on safeguarding against this novel risk to cybersecurity.
Why chatbots present a new opportunity for threat actors
Although the advancements in chatbots offer many valuable opportunities from a business perspective, cybercriminals are now also tapping into their potential for their own ends. This is because chatbots significantly lower the barrier to gaining initial access to networks, as they remove much of the knowledge and skill required to carry out attacks.
Recent research backs up these reports: threat actors have been observed leveraging ChatGPT to build dark web marketplaces, malware, and other attack tools, which means they are finding ways around the usage restrictions of these services. With AI chatbots enabling threat actors who have little to no technical expertise to develop malicious code, craft compelling phishing emails, build fake website landing pages, and much more, one can only imagine what a skilled hacker can do with the technology.
And while security fixes continue to roll out to tackle these risks, chatbots still retain their potential for use in cyberattacks. Since it is only a matter of time before chatbots and other AI-based systems are fully adopted by organizations, it makes sense to get employees up to speed on AI with CISSP training and other programs to enable in-house risk management.
How ChatGPT is used for cyberattacks
1. Phishing attacks
Phishing via email continues to be one of the most popular ways, if not the most popular way, for hackers to conduct credential-harvesting attacks and gain initial network access. One of the most common ways to detect phishing emails has been to check for punctuation and spelling errors, but with a chatbot’s help, hackers can now polish their text and make it more elaborate, more convincing, and more likely to appear as if a real person wrote it.
Apart from emails, chatbots also excel at generating scam messages about fake giveaways and the like to trick victims. Although built-in restrictions are meant to prevent chatbots from performing such functions, threat actors can easily circumvent them with the right wording and continue crafting socially engineered messages that are far more believable than those they could write themselves.
2. Low-sophistication attacks
Hackers are sparing no effort in seeing how they can maximize ChatGPT’s capabilities for their malicious purposes, resulting in an increase in the sophistication and frequency of attacks, as phishing emails and code writing are now more accessible with the chatbot’s help.
The right prompt to ChatGPT can create many types of code, ranging from encryptors and decryptors to information stealers that use popular encryption ciphers. On top of that, threat actors are already experimenting with using ChatGPT to build dark web marketplaces.
Meanwhile, cybersecurity researchers are conducting their own testing of ChatGPT’s security risks to assess its limitations. They have pointed out the risk of hackers using the technology to create polymorphic malware, i.e. a more advanced type of malware that can mutate its code and is thus harder to detect and mitigate. With chatbots widely accessible to anyone, it is only a matter of time before there is a spike in the weaponization of the technology and the damage it can cause.
3. Dissemination of false information
Most chatbots made public to date have either mistakenly provided answers that seemed accurate but were factually incorrect or been directly manipulated into outputting false information. ChatGPT, for instance, has no verification process to check whether its outputs are correct.
Therefore, chatbots can potentially give online trolls, radical groups, and nation-state threat actors the capability to create volumes of misinformation that they can spread all over the internet using bot accounts to drive their agenda.
Cybersecurity recommendations
Current cybersecurity practices still apply when it comes to staying secure against AI-based attacks, as hackers only use chatbots to supplement their attacks. Furthermore, deploying next-generation antivirus (NGAV) and endpoint detection and response (EDR) tools on all organizational endpoints is highly recommended to aid in detecting suspicious behavior.
Despite their impressive capabilities, ChatGPT and similar services lack situational awareness when it comes to verifying the information they pull from the web and detecting scam and phishing content. Most of the chatbot security solutions likely to be implemented in the future are currently based on applications and software that detect whether a piece of writing was produced by a real person or a chatbot. However, ensuring these solutions achieve a high level of accuracy will take some time. Still, they will be extremely valuable in helping shut down chatbot security risks once they are ready.
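To give a flavor of how such detectors work, the toy sketch below (Python, standard library only; every function name here is hypothetical and not taken from any real detection product) flags text whose sentence lengths are unusually uniform. Low "burstiness" of this kind is one naive statistical signal sometimes associated with machine-generated prose; real detectors rely on far more sophisticated model-based features, and this sketch should not be treated as a reliable classifier.

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split text on sentence-ending punctuation and return word counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness_score(text: str) -> float:
    """Ratio of standard deviation to mean sentence length.

    Lower values mean the sentences are more uniform in length.
    """
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

def looks_machine_generated(text: str, threshold: float = 0.2) -> bool:
    """Naive heuristic: very uniform sentence lengths get flagged.

    The 0.2 threshold is an arbitrary illustration value, not a
    validated cut-off from any real detection tool.
    """
    return burstiness_score(text) < threshold
```

In practice, production detectors combine many such signals (perplexity under a language model, vocabulary distribution, punctuation patterns) and still struggle with accuracy, which is exactly the maturity gap noted above.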
Conclusion
Although AI chatbots like ChatGPT can bring many benefits to businesses, organizational leaders must remain vigilant about the security and legal risks they pose. As with every trending technology, it is vital to consider and prepare for these potential pitfalls before anything else. Businesses should also keep abreast of emerging cybersecurity trends to watch out for.
Get on the fast track of chatbot security by signing up for cybersecurity courses here at BridgingMinds. We offer highly sought-after certifications and professional courses, such as SF – Information Systems Security Professional (CISSP) certification, CompTIA Security+, and SF – Certified Ethical Hacker (CEH), that help you take the next big step in your career. Apart from cybersecurity, we also provide courses on project management, DevOps, Agile, and much more. Feel free to contact us at any time for more information and scheduling details.