On 16–17 November, SIPRI and the United Nations Office for Disarmament Affairs (UNODA) organized a two-day capacity-building workshop on ‘Responsible AI for Peace and Security’ for students in science, technology, engineering and mathematics (STEM). The workshop, the first in a series of four, aimed to give emerging artificial intelligence (AI) practitioners insights into how the risks that civilian AI research and innovation may pose to international peace and security can be addressed.

The workshop took place in Malmö, Sweden, in partnership with Malmö University and Umeå University. It brought together 24 participants from 17 countries, including Australia, Bangladesh, China, Ecuador, Finland, France, Germany, Greece, India, Mexico, the Netherlands, Singapore, Sweden, the United Kingdom and the United States. Through a series of interactive sessions, participants deepened their understanding of three key areas: the implications of civilian AI research and innovation for international peace and security; how the associated risks may be prevented or mitigated through responsible research and innovation; and how they can advocate for the advancement of responsible AI for peace and security. The workshop was facilitated by SIPRI experts, with contributions from professors at Umeå University and Malmö University.

The workshop series is part of the initiative ‘Responsible Innovation in AI for Peace and Security’, which is jointly conducted by SIPRI and UNODA with the support of the European Union and runs until the end of 2024. The next workshop will take place in Tallinn, Estonia, on 14–15 February 2024, in collaboration with Tallinn University of Technology.
About the SIPRI Governance of AI Programme
The SIPRI Governance of AI Programme aims to enhance understanding of AI’s impact on international peace and security. The programme’s research covers a range of topics, including the potential applications of AI in conventional, cyber and nuclear weapon systems; the humanitarian and strategic implications of military uses of AI; and the opportunities that AI presents for arms control and verification. The programme also explores how the risks associated with AI can be governed through international law, arms control mechanisms, and responsible research and innovation.