Over the past year, there has been a noticeable increase in the speed, scope, and complexity of attacks, coinciding with the rapid advancement and acceptance of AI technology. Defenders are just starting to realize the potential of generative AI in tipping the scales in cybersecurity and outsmarting adversaries. However, it is equally crucial to understand the risks associated with the misuse of AI by threat actors. In collaboration with OpenAI, we are releasing a research report on emerging threats in the AI era, focusing on activities linked to known threat actors, such as prompt injections, attempted misuse of large language models (LLM), and fraudulent activities.
Our analysis of how threat actors are currently utilizing LLM technology suggests that attackers view AI as a productivity tool to enhance their offensive strategies. While Microsoft and OpenAI have not yet detected any significantly novel AI-enabled attack techniques, we remain vigilant in monitoring this landscape closely.
The primary goal of Microsoft’s partnership with OpenAI, as demonstrated through this research, is to promote the safe and responsible use of AI technologies like ChatGPT. We are committed to upholding ethical standards to safeguard the community against potential misuse. To achieve this, we have implemented measures to disrupt assets and accounts associated with threat actors, enhance the protection of OpenAI LLM technology and users, and establish safeguards around our models. Additionally, we are dedicated to leveraging generative AI to combat threat actors and introduce new tools like Microsoft Copilot for Security to empower defenders worldwide.
In response to the evolving technological landscape, strong cybersecurity measures are essential. For instance, the White House’s Executive Order on AI mandates rigorous safety testing and government oversight for AI systems with significant impacts on national security, economic security, public health, and safety. Our actions to enhance the security of our AI models and collaborate with partners align with the Executive Order’s call for comprehensive AI safety and security standards.
Microsoft is taking a proactive stance by announcing principles that guide our policies and actions in mitigating risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates. These principles include identifying and taking action against malicious threat actors, notifying other AI service providers of detected threats, collaborating with stakeholders, and ensuring transparency in our efforts.
Our commitment to responsible AI innovation underscores our dedication to prioritizing safety, integrity, human rights, and ethical standards. The principles we have unveiled today build upon Microsoft’s Responsible AI practices, voluntary commitments to advance responsible AI innovation, and the Azure OpenAI Code of Conduct. By adhering to these principles, we contribute to strengthening international law and norms, in alignment with the goals of the Bletchley Declaration endorsed by 29 countries.
Furthermore, Microsoft and OpenAI’s collaborative defenses play a crucial role in safeguarding AI platforms against known and emerging threats. By tracking over 300 unique threat actors, including nation-state actors and ransomware groups, Microsoft remains vigilant in analyzing and correlating threat attributes to prevent malicious activities. Our ongoing research and collaboration with OpenAI enable us to monitor attack activities, share intelligence, and enhance our defenses to protect our customers.
As the threat landscape evolves, it is imperative to track and neutralize threats effectively. Microsoft’s collaboration with MITRE to integrate LLM-themed tactics, techniques, and procedures (TTPs) into frameworks like MITRE ATT&CK® and MITRE ATLAS™ demonstrates our commitment to countering threats in the realm of AI-powered cyber operations.
In summary, the research conducted by Microsoft and OpenAI sheds light on the evolving strategies of threat actors in leveraging AI technologies. By sharing our findings and intelligence, we aim to equip the security community with the necessary insights to counter malicious activities effectively. The appendix provided outlines LLM-themed TTPs that serve as a common taxonomy to track and counter the misuse of LLMs. Our ongoing efforts underscore our commitment to safeguarding against the misuse of AI technologies and advancing responsible AI innovation.