Microsoft Threat Intelligence has released its quarterly “Cyber Signals” report, emphasising the dual nature of AI tools, which benefit defenders and adversaries alike.
Microsoft’s latest research throws light on how threat actors are integrating generative AI into their operations, raising concerns about the cybersecurity threats this could pose in the future. The tech giant, in collaboration with OpenAI, has detailed how five nation-state threat actors have used large language models (LLMs) such as ChatGPT to enhance their attacks.
While Microsoft reassures enterprises that the increased use of generative AI does not pose an immediate threat, it stresses the importance of strengthening existing security protocols in response to recent nation-state activity. The research shows how threat actors around the world are employing LLMs for tasks such as researching technologies and vulnerabilities, gathering information on regional geopolitics, and targeting high-profile individuals.
Although it has not identified any significant attacks using LLMs, Microsoft anticipates a shift in the threat landscape towards heavier exploitation of AI tools. This aligns with earlier warnings from the U.K.’s National Cyber Security Centre about the expected rise in cyber threats facilitated by AI in the coming years.
Beyond cataloguing adversary activity, the “Cyber Signals” report underscores the urgency of designing, deploying, and using AI securely to mitigate potential risks.
The report also highlights the shortcomings of traditional security measures in addressing evolving cyber threats, especially amid a shortage of cybersecurity professionals. Microsoft warns that the integration of generative AI into cyber operations will further exacerbate these challenges, as evidenced by the observed activities of nation-state threat actors.
To illustrate how these threat actors are utilising LLMs, Microsoft identifies five groups: Forest Blizzard (associated with the Russian government), Emerald Sleet (North Korean), Charcoal Typhoon and Salmon Typhoon (Chinese), and Crimson Sandstorm (Iranian). These groups have been observed using LLMs for purposes such as research, technical reconnaissance, and social-engineering support.
Microsoft expresses concern over the impact of AI on social engineering tactics, including identity theft and impersonation. The report warns of the increasing sophistication of AI-driven attacks, such as deepfakes and voice cloning, which could undermine traditional security measures.
In response to these evolving threats, Microsoft recommends a proactive approach: continued employee education, stringent access controls, and the implementation of security best practices for AI tools and services. The company also advocates transparency in the AI supply chain and proactively communicating AI policies and potential risks to employees.
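Microsoft’s recommendations are stated at a high level. As one hedged illustration of what “stringent access controls” for an AI service might look like in practice, the minimal Python sketch below gates an internal LLM endpoint by user role and keeps an audit log of every prompt. All names here (handle_prompt, query_model, ALLOWED_ROLES) are hypothetical and assume an organisation-run gateway; they are not part of any Microsoft or OpenAI API.

```python
# Hypothetical sketch: least-privilege gating plus audit logging for an
# internal LLM endpoint. Every name and role below is illustrative.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

# Roles permitted to query the model (assumption: the organisation keeps
# such a mapping in its identity provider).
ALLOWED_ROLES = {"analyst", "engineer"}

def query_model(prompt: str) -> str:
    # Stub standing in for the organisation's real model backend.
    return "(model response)"

def handle_prompt(user: str, role: str, prompt: str) -> str:
    """Forward the prompt only for approved roles, keeping an audit trail."""
    if role not in ALLOWED_ROLES:
        log.warning("denied user=%s role=%s at %s", user, role,
                    datetime.now(timezone.utc).isoformat())
        raise PermissionError(f"role '{role}' may not query the model")
    # Audit-log every accepted prompt before it reaches the model.
    log.info("accepted user=%s role=%s prompt_chars=%d",
             user, role, len(prompt))
    return query_model(prompt)

if __name__ == "__main__":
    print(handle_prompt("alice", "analyst", "Summarise this advisory."))
```

The design choice is simply deny-by-default plus auditability: narrowing who can reach the model limits the blast radius of a compromised account, and the prompt log gives defenders the visibility the report argues traditional measures lack.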
While Microsoft acknowledges that the use of AI by threat actors is not new, it anticipates that AI will enhance the effectiveness and scale of future attacks. By staying vigilant and implementing robust security measures, organisations can better mitigate the evolving cyber threats facilitated by generative AI.

