Microsoft reveals how Iran, North Korea, China, and Russia are using AI for cyber war
In a blog post, the Redmond-based company emphasized that while these techniques were still “early-stage,” they were neither “particularly novel nor unique.” Nevertheless, Microsoft deemed it crucial to expose them publicly: as US rivals harness large language models to expand their network-breaching capabilities and conduct influence operations, transparency becomes essential.
For years, cybersecurity firms have used machine learning for defense, primarily to identify anomalous behavior within networks. But malicious actors, both criminals and offensive hackers, have embraced the technology too, and the introduction of large language models, exemplified by OpenAI’s ChatGPT, has raised the stakes in the cybersecurity cat-and-mouse game.
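To make the defensive use concrete, here is a minimal sketch of the kind of network anomaly detection described above, built on scikit-learn’s IsolationForest. The per-host features, synthetic traffic, and contamination rate are illustrative assumptions for this sketch, not any vendor’s actual detection pipeline.

```python
# Minimal sketch of ML-based network anomaly detection.
# The features (bytes sent, connection count, distinct ports contacted)
# and the contamination rate are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: per-host hourly stats
# [bytes sent (KB), connections, distinct ports].
normal = rng.normal(loc=[500, 40, 5], scale=[100, 10, 2], size=(1000, 3))

# A few anomalous hosts: exfiltration-like volume plus port-scanning behavior.
anomalies = np.array([[5000.0, 300.0, 60.0],
                      [4200.0, 250.0, 45.0]])

X = np.vstack([normal, anomalies])

# Train on baseline traffic, then score everything seen since.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = model.predict(X)  # +1 = inlier, -1 = flagged as anomalous

print(f"Flagged {np.sum(labels == -1)} of {len(X)} host-hours for review")
```

The design point is that the model learns a baseline of normal behavior and surfaces outliers for analysts, rather than matching known attack signatures; that is why this approach remains useful even as attackers change tooling.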
Microsoft, which has invested billions of dollars in OpenAI, timed the announcement to coincide with the release of a report on the potential impact of generative AI on malicious social engineering. With more than 50 countries holding elections this year, the threat of disinformation looms large, exacerbated by increasingly sophisticated deepfakes and voice cloning.
Microsoft said it has disabled the generative AI accounts and assets associated with the named groups. Specific examples it provided include:
North Korea: The North Korean cyberespionage group known as Kimsuky has used the models to research foreign think tanks that study the country, and to generate content likely to be used in spear-phishing campaigns.
Iran: Iran’s Revolutionary Guard has used large language models to assist in social engineering, in troubleshooting software errors, and even in studying how intruders might evade detection in a compromised network….