US adversaries, chiefly Iran and North Korea and to a lesser extent Russia and China, are beginning to use generative artificial intelligence to mount or organize offensive cyber operations, Microsoft said Wednesday.
Microsoft said that, working with its business partner OpenAI, it had detected and disrupted numerous threats that used or attempted to use the AI technology the two companies had developed. It described the techniques as “early-stage” and neither particularly novel nor unique, but said it was important to expose them publicly as US rivals leverage large-language models to expand their ability to breach networks and conduct influence operations.
Cybersecurity firms have long used machine learning for defense, but criminals and offensive hackers have adopted the technology as well. The arrival of large-language models, led by OpenAI’s ChatGPT, has raised the stakes in that cat-and-mouse game.
Microsoft, which has invested billions of dollars in OpenAI, noted that Wednesday’s announcement coincided with a report it issued warning that generative AI is expected to enhance malicious social engineering, enabling more sophisticated deepfakes and voice cloning. That poses a threat to democratic processes in a year when more than 50 countries will hold elections, magnifying the spread of disinformation.
Microsoft described specific cases in which adversarial groups employed generative AI, and said the groups’ accounts and assets had been disabled:
- The North Korean group Kimsuky used the models to research foreign think tanks that study the country and to generate content likely destined for spear-phishing campaigns.
- Iran’s Revolutionary Guard used large-language models to assist in social engineering, troubleshoot software errors, and study how intruders might evade detection in compromised networks. That included crafting phishing emails, among them one impersonating an international development agency and another aimed at luring prominent feminists to an attacker-built website.
- The Russian GRU unit Fancy Bear used the models to research satellite and radar technologies relevant to the conflict in Ukraine.
- The Chinese group Aquatic Panda, which targets a broad range of industries, higher education and governments, interacted with the models in ways suggesting a limited exploration of how they could augment its technical operations.
- Maverick Panda, another Chinese group with a history of targeting US defense contractors, evaluated the models’ effectiveness as a source of information on potentially sensitive topics, high-profile individuals, regional geopolitics and US influence.
In a separate blog post, OpenAI said its GPT-4 model chatbot offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available non-AI tools. Cybersecurity researchers, however, expect that to change.
Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, has described China and artificial intelligence as the two epoch-defining threats and challenges of our time, stressing that AI must be built with security in mind. Critics contend that large-language models like ChatGPT were rushed to market with security treated as an afterthought.
Researchers expect the use of AI and large-language models in cyber operations to mature into a potent weapon for nation-state militaries, potentially altering the dynamics of modern warfare.