The other side of AI

Artificial intelligence (AI) has profoundly transformed our daily lives, from the way we work and communicate to how we approach complex problems in fields such as healthcare, education, and commerce. However, like any powerful tool, AI also has a dark side: as its capabilities expand, so do the opportunities for malicious actors to turn it to destructive ends. Despite the good intentions of its developers, advanced models such as ChatGPT have already been exploited to plan and refine sophisticated cyberattacks. This article explores the other side of artificial intelligence through three recent cases in which it was used in cybercrime, highlighting the urgent need for stronger controls and regulation so that the technology does not become a threat rather than a solution.

First attack: Chinese threat actors and the 'SweetSpecter' campaign

The first significant attack involving ChatGPT was attributed to a China-based group of threat actors. The campaign, known as 'SweetSpecter', targeted several Asian governments and relied on spear-phishing: emails carrying ZIP archives that conceal a malicious file. Once the user downloads and opens the attachment, the file infects the system.

Most alarmingly, OpenAI engineers discovered that the 'SweetSpecter' campaign had been built through the interaction of several accounts that used ChatGPT to write code and research vulnerabilities. The attackers leveraged the model's ability to generate complex solutions, demonstrating how AI, in the wrong hands, can become a powerful tool for cybercrime.

Second attack: CyberAv3ngers and password theft

The second attack refined with the help of AI was carried out by a group known as CyberAv3ngers, based in Iran. The group used ChatGPT to identify and exploit vulnerabilities in devices running macOS, with the goal of stealing users' passwords and gaining access to sensitive personal information. ChatGPT's precision in analyzing code and systems allowed CyberAv3ngers to hone their attack, underscoring the threat posed by using AI for this type of crime.

Third attack: Storm-0817 and the development of malicious software

The third attack was planned by another group of Iranian origin, known as Storm-0817. This time, the group used ChatGPT to develop malware designed specifically for Android devices, capable of stealing victims' contact lists, call logs, and browser history. The combination of technical know-how and AI assistance allowed Storm-0817 to create a sophisticated and powerful threat in a short period of time.

These recent cases reveal that artificial intelligence, for all its benefits, can also be turned to malicious ends. The ability of models like ChatGPT to generate code, discover vulnerabilities, and optimize cyberattacks has raised alarm bells in the technology and security community. This landscape highlights the importance of creating regulations and developing protective technologies to mitigate the risks associated with the misuse of AI. As the technology advances, it is essential to balance the development of these tools with appropriate security measures, ensuring that AI remains a force for good rather than an emerging threat.

This is, without a doubt, the other side of artificial intelligence, one that demands attention and coordinated action to protect users and societies from its potential dangers.