AI And The New Wave Of Threats It Poses To Cybersecurity

Artificial intelligence has advanced by leaps and bounds in recent years, leading to the promised AI revolution now being a reality. Given its immense potential, it can contribute a lot of good to various applications. However, it can also pose a risk, especially concerning cybersecurity, where it can be said to be akin to a double-edged sword.

According to McKinsey, AI, alongside other advanced tools like automation and machine learning, will significantly expedite cyberattacks over the next several years and dramatically change the threat landscape. In fact, we are already seeing these risks come to fruition today and making headlines in the news.

Attackers are leveraging the technology to improve their phishing and fraud techniques, with the recent leak of Meta’s 65 billion parameter language model undoubtedly aiding the process. Moreover, regular users are recklessly inputting business-sensitive data into AI-based services, leaving their organisation’s IT security teams scrambling to control the issue.

In short, the misuse of AI, particularly in the hands of cybercriminals, is becoming a growing concern, leading to cybersecurity certifications being high in demand. The AI revolution is moving rapidly and has created the following major issues that cybersecurity experts must face. 

Asymmetry in the attacker-defender dynamic

It is predicted that hackers will be faster in adopting and engineering AI than defenders and soon be capable of launching more sophisticated attacks backed by generative AI at an unprecedented scale, all at a low cost.

The first to benefit from this technology will largely be on the social engineering front. These attacks require manual effort to create text, voice, and images. But with the help of AI, they can now be easily automated. Furthermore, attackers can leverage AI to better their malicious software and launch new attacks at scale. For instance, they can quickly and more easily generate polymorphic code for their malware, making them undetectable by signature-based systems.

AI and Security: Further Erosion of social trust

Thanks to social media, misinformation can now spread like wildfire. A recent poll by the University of Chicago Pearson Institute/AP-NORC states that 91% of adults claim that this spread is becoming a bigger problem, with nearly half worrying that they may have contributed to the issue as well. Introducing a machine into the equation erodes social trust faster and cheaper.

Today’s AI/ML systems are based on large language models with inherently limited knowledge. Thus, when they do not know the answer to a question, they make it up, which is an unintended consequence called “hallucinating”. Thus, a lack of accuracy is a serious problem whenever people search for legitimate answers.

Ultimately, this betrays human trust and creates dramatic mistakes with similarly dramatic consequences. For instance, OpenAI’s ChatGPT wrongly identified a mayor in Australia as being jailed for bribery when in reality, he was the whistleblower in a case, leading to him now readying a lawsuit against the company.

Unprecedented attacks on AI systems

Over the next decade, experts predict that a new generation of cyberattacks will focus on AI/ML systems by influencing the classifiers they use to control outputs and bias models. They can thus create malicious models that seem no different from the real ones, which can cause significant harm depending on how they are used.

In addition, prompt injection attacks will be more common, as demonstrated by a Stanford University student that convinced Microsoft’s Bing Chat to reveal its internal directives just a day after it launched.

Hackers may soon start an arms race with adverse ML tools that deceive AI systems, extract sensitive data from them, or poison their data. As AI increasingly generates more of the code used in software applications, attackers can exploit the inherent vulnerabilities in these systems to compromise applications at scale.

Conclusion

AI innovation is set to proceed unabated, and suggesting to stifle is disingenuous, especially since hackers will certainly not abide by such a request. Therefore, defenders must strive to explore innovative approaches to cybersecurity to see improvements in behavioural analytics and threat hunting.

Boost your cybersecurity skills and quickly acclimate to our AI-powered future at BridgingMinds, where we offer SSG training courses, including the SF – Information Systems Security Professional (CISSP) certification considered the gold standard for IT security professionals. Our professional courses cover beyond just cybersecurity but also include PMP virtual training, cloud, DevOps, Agile, and more. Feel free to contact us at any time to learn more about our courses!

×