Political communities must work together if AI's potential is to be harnessed for good.
By Kay Firth-Butterfield
Discussion of the risks of AI has been abundant since large language models burst into public view in the autumn of 2022, and the debate about these systems and their applications has been under way since 2014. Today, though, the conversation about how AI could be harnessed to address critical global problems risks being drowned out.
The crux is governance. To reap the benefits of AI, the AI community must earn the public's trust, and that can only be achieved through regulation. Looking ahead, the priority must be making today's systems safe, an approach often called responsible AI. A survey conducted by the AI Policy Institute in spring 2023 found that more than 60% of Americans are concerned about AI's adverse impacts. Without robust regulation, easing those concerns and giving people the tools to address them will be an uphill battle.
At a time when public confidence in AI is paramount, trust is eroding in the political sphere. A Luminate study found that 70% of British and German citizens who said they understood AI were apprehensive about its influence on how they vote. Similarly, an Axios/Morning Consult survey found that more than half of Americans believe AI will affect the outcome of the 2024 election, and many of them say that prospect weakens their trust in the electoral process. A survey by the American Psychological Association, meanwhile, found that 40% of American workers fear losing their jobs to AI, and a striking 79% oppose companies regulating their own use of it. Left unaddressed, these apprehensions could stall the economic and societal benefits AI can offer.
On a more optimistic note, a 2021 PwC study yielded encouraging results. Analyzing more than 90 sets of ethical AI principles published by organizations around the world, the researchers identified nine core concepts common to all of them, including accountability, data privacy, and human agency. The task for governments is to turn those shared principles into action, collaborating internationally to prepare for an uncertain future.
If we keep passively adapting to each new technology rather than acting proactively, we risk arriving at a future that no longer serves human needs. The European Union's risk-mitigation approach addresses current concerns, but it falls short on the more fundamental question of how humans and AI should interact in the years ahead. In the U.S., meanwhile, a patchwork of state-by-state regulation could slow progress and make compliance more complicated.
Democratic institutions worldwide, working with stakeholders from civil society, academia, and industry, must establish guidelines that set out the criteria businesses everywhere should meet when developing, deploying, and using AI systems. Policymakers have a pivotal role in defining objectives such as privacy and data protection, not least because people using AI, even for beneficial purposes, may not fully grasp its potential for harm. And to ensure the integrity of AI systems from the outset, development teams must adopt best practices and comply with existing and emerging regulation.
International regulation or treaties may seem the obvious way to bridge these governance gaps, but the obstacles are real. The UN Security Council is often deadlocked even on mitigating present harms, let alone on questions that demand foresight. Despite calls for action since 2013, there is still no agreement on controlling lethal autonomous weapons. Expecting the Security Council to proactively design a universally accepted AI policy is therefore ambitious. The United Nations' new high-level panel on AI is a positive step, but meaningful regulation may not come from it quickly enough. Time is of the essence.
Nor does international cooperation have to run through the UN: models such as the European Organization for Nuclear Research (CERN) or the vaccine alliance Gavi could serve as frameworks. Following that path could also loosen the Global North's dominance of AI technology, fostering inclusivity and ensuring the benefits of AI reach people from diverse cultural backgrounds. Governments worldwide must work together to shape regulation that paves the way to a prosperous future for their citizens.
Effective governance is hard, and true global governance is harder still. Progress may be gradual, so in the interim, companies that design, develop, and use AI must regulate themselves, with unwavering support from their leadership. Collaboration is essential to a future in which AI enriches human experience rather than dictating it. We need a comprehensive strategy, and we need to act now.
Kay Firth-Butterfield is the world's first chief AI ethics officer, former head of artificial intelligence at the World Economic Forum, and CEO of Good Tech Advisory.