The emergence of sophisticated AI tools has turned even mediocre hackers into potential millionaires. According to cybersecurity firm Expel, a group known as HexagonalRodent, believed to be sponsored by the North Korean state, used AI tools to carry out targeted attacks on cryptocurrency developers. By leveraging the coding and design capabilities of US-based companies such as OpenAI and Cursor, the group managed to steal up to $12 million in just three months.
Despite their reliance on AI, however, the group’s operations showed a certain amateurism. Their phishing websites and malware were littered with emojis—a tell-tale sign that they were produced not by seasoned coders but by operators with limited expertise. This blend of automation and inexperience raises questions about how effective such tactics can be.
Even with those shortcomings, AI tools have significantly boosted the capabilities of less skilled hackers. Security researcher Marcus Hutchins has highlighted how unskilled operators can now carry out tasks that were previously beyond their reach, thanks to these technologies. The North Korean state's willingness to utilise such tools points to a strategic shift towards automation in cybercrime.
The case of HexagonalRodent demonstrates AI's potential impact on cybersecurity. While these tools can make cyberattacks more efficient and effective, the episode also underlines the need for robust security measures capable of detecting even AI-generated malware. As more hackers turn to such technologies, organisations will need to adapt their defences accordingly.