Although artificial intelligence (AI) has existed for some time, recent advancements have propelled the technology to the forefront. Experts suggest that its surge mirrors past industrial revolutions and could add trillions of dollars to global economic output. However, this progress also introduces new risks that could reshape societal norms and global geopolitical dynamics.
A worldwide discourse on AI governance is ongoing as major powers such as the United States, China, and the European Union (EU) adopt divergent strategies to regulate AI technology. Effectively managing these risks is paramount as AI development and implementation continue to accelerate at an unprecedented pace.
What Constitutes Artificial Intelligence?
There is no single definition of “artificial intelligence”; the term generally refers to computers’ ability to perform tasks typically associated with human intelligence. Computer scientist John McCarthy, who coined the term in the 1950s and later founded Stanford University’s AI laboratory, described the field as the science and engineering of making intelligent machines, and he regarded intelligence as the ability to solve problems in a changing environment.
The public release of generative AI tools such as the ChatGPT chatbot in 2022 has significantly raised the technology’s profile. These tools are trained on vast amounts of data and generate statistically likely outputs in response to specific prompts. Applications built on these models can produce text, images, audio, and other content that is remarkably realistic.
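To make the idea of “statistically likely outputs” concrete, the following is a minimal, illustrative sketch in Python. It uses a toy bigram model (simple word-pair counts) rather than the large neural networks behind tools like ChatGPT; the sample training text and the generate function are invented for illustration only.

```python
# Toy illustration (not how production models work) of generating statistically
# likely continuations from training data, using simple word-pair counts.
from collections import Counter, defaultdict
import random

training_text = (
    "artificial intelligence can generate text "
    "artificial intelligence can generate images "
    "artificial intelligence may transform economies"
)

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word: str, length: int = 5) -> str:
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    output = [prompt_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed continuation for this word
        choices, counts = zip(*candidates.items())
        output.append(random.choices(choices, weights=counts)[0])
    return " ".join(output)

print(generate("artificial"))
# e.g. "artificial intelligence can generate images"
```

Real generative models follow the same basic logic, predicting likely continuations of a prompt, but they learn billions of parameters from enormous corpora rather than counting word pairs.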
Another commonly discussed category is artificial general intelligence (AGI), or “strong” AI: systems that could learn and apply knowledge across domains much as humans do. Such systems do not yet exist, and their precise form remains a subject of debate among researchers.
The Evolution of AI
Alan Turing and John von Neumann are considered foundational figures in AI research, a field that now spans more than eight decades. Over the years, AI has powered tools ranging from chess-playing computers to online language translators, evolving from early machines programmed in binary code to sophisticated software applications.
Historically, countries with substantial investments in AI have relied on public funding for development. In China, government funding predominantly fuels AI research, while the United States historically leveraged research from agencies like the Defense Advanced Research Projects Agency (DARPA). In recent years, AI development in the U.S. has shifted primarily to the private sector, attracting hundreds of billions of dollars in investment.
In 2022, U.S. President Joe Biden signed the CHIPS and Science Act, directing $280 billion in federal funding toward semiconductor manufacturing and scientific research, including the cutting-edge chips needed to meet AI’s substantial computational and data storage requirements. By January 2023, ChatGPT had become the fastest-growing consumer application in history.
Writing in Foreign Affairs, Eurasia Group President Ian Bremmer and Inflection AI CEO Mustafa Suleyman argue that AI’s advancement represents a paradigm shift, a technological transformation that will reshape politics, markets, and cultures.
The Global Economic Impact of AI
AI tools are already integrated into the decision-making processes of businesses and organizations worldwide. Tech companies use algorithms for targeted advertising, financial institutions employ computational models in investment operations, and autonomous vehicle manufacturers such as Tesla rely heavily on AI. Following ChatGPT’s success, even less technology-centric companies are adopting generative AI tools to automate processes such as customer service. An April 2023 McKinsey survey found that one-third of global businesses reported using AI in some capacity.
The widespread adoption of AI has the potential to accelerate global technological innovation. Nvidia, a U.S.-based company that dominates the AI chip market, experienced a stock surge in 2023, with its valuation exceeding $1 trillion as demand for semiconductors soared.
As the AI sector expands, experts anticipate significant benefits for the world economy, with some projections suggesting AI could raise annual global gross domestic product (GDP) by $7 trillion within the next decade. CFR expert Sebastian Mallaby, speaking on the podcast Why It Matters, emphasizes that economies resistant to AI integration risk falling behind. He suggests that AI will democratize access to solutions for challenges ranging from climate change to healthcare advancements, and even breakthroughs such as nuclear fusion.
Employment Implications of AI
The advent of AI may bring a trade-off between heightened productivity and job displacement, reminiscent of past industrial revolutions. Generative AI, however, poses a distinctive risk to white-collar jobs, with the potential to replace roles across diverse industries at an unprecedented pace. The Organization for Economic Cooperation and Development (OECD) estimates that AI-driven automation could jeopardize 25 percent of jobs globally, particularly those involving data analysis and information-processing tasks. Professions such as accountants, web developers, marketers, and technical writers are deemed highly susceptible to AI-driven automation.
Concerns about inequality have surfaced alongside the rise of generative AI, with high-skilled jobs seemingly more shielded from technological disruption. However, a 2023 study by MIT and Stanford University researchers found that less-experienced call center operators recorded roughly twice the productivity gains of their more seasoned colleagues after adopting AI tools, suggesting that lower-skilled workers stand to benefit from AI adoption.
AI’s Impact on Climate Change
The intersection of AI and environmental issues presents both challenges and opportunities. Some experts voice concerns over AI’s substantial carbon footprint, while others believe generative AI could accelerate efforts to combat climate change. AI systems currently produce greenhouse gas emissions comparable to those of the aerospace industry, and their energy consumption is projected to rise as the technology’s computational demands grow.
Advocates of AI propose leveraging solar power to mitigate emissions, with tech giants like Apple, Google, and Meta utilizing self-generated renewable energy to power data centers. These companies also invest in carbon offset programs to counterbalance emissions from fossil fuel-based energy consumption. Furthermore, AI’s potential to enhance solar research and optimize energy efficiency offers promise for reducing pollution across various industries. For instance, researchers in Mozambique are utilizing AI to bolster early warning systems for impending disasters by enhancing flood prediction capabilities.
AI and National Security
Many experts frame AI development as a contest for technological supremacy between the United States and China, positing that the victor will gain significant economic and geopolitical advantages. While the United States has taken steps to regulate AI, China has enacted laws mandating that AI systems align with “core socialist values” and antidiscrimination measures.
The Department of Defense envisions AI transforming the nature of warfare by powering autonomous weapons and augmenting strategic analysis, with significant implications for U.S. national security. Concerns over lethal autonomous weapons persist: Kyiv has deployed AI-powered drones capable of operating independently in its war with Russia, a notable first for a major conflict. Future military applications of AI could automate the targeting of infrastructure or accelerate battlefield decision-making, raising fears of escalation, potentially even to the use of nuclear weapons.
Additionally, AI amplifies the threats of misinformation and deception, a particular concern as numerous countries hold national elections in 2024. The proliferation of deepfakes created with generative AI tools, and their use in election campaigns around the world, has raised alarms. There are also concerns that malicious actors could exploit AI to craft sophisticated phishing attacks tailored to individuals’ interests, compromising election systems. The Department of Justice has documented spoofing as a tactic of election hackers, including Russia during its interference in the 2016 U.S. election. Together, these threats feed a climate of skepticism about basic facts, posing risks to political stability, as noted by Jessica Brandt, head of the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative.
The Prospect of Artificial General Intelligence
While some experts argue that current AI systems lack human-level reasoning, others anticipate advances that could close the gap. Google DeepMind CEO Demis Hassabis has suggested that AGI could arrive within the next decade, and OpenAI states that its mission is to ensure AGI benefits all of humanity. AI’s potential to outpace human capabilities was exemplified in 2020, when DeepMind’s AlphaFold system effectively solved protein-structure prediction, a decades-old grand challenge in biology.
Addressing AI Risks
A coalition of AI experts, including leaders from Anthropic, Google DeepMind, and OpenAI, has warned that mitigating AI-related risks should be a global priority alongside societal-scale threats such as pandemics and nuclear war. Concerns persist that AI systems driven by optimization directives could inadvertently pose existential threats by pursuing objectives in ways that jeopardize human survival. While some researchers dismiss these scenarios as alarmist, others contend they underscore the potential for unintended consequences as AI systems strive to accomplish their designated tasks.
Governance and Regulation of AI
Policymakers, civil society groups, researchers, and business leaders broadly agree that AI needs to be governed; they diverge on how. Governments worldwide are pursuing varied strategies to regulate the technology.
In the United States, efforts to govern AI intensified in 2023: the Biden administration secured voluntary commitments from fifteen leading technology firms to adopt shared AI safety standards and issued an executive order aimed at establishing a unified framework for safe AI use across the executive branch. EU legislators are advancing regulations to enforce transparency requirements and restrict AI surveillance applications. China, under the Communist Party’s leadership, requires AI systems to adhere to “core socialist values” and antidiscrimination policies.
International cooperation on AI governance is gaining traction, with initiatives like the Hiroshima Process launched by the Group of Seven (G7) to develop common AI governance standards. The United Nations established an AI Advisory Board, featuring U.S. and Chinese representatives, to coordinate global AI governance efforts. The inaugural AI Safety Summit, attended by twenty-eight governments, emphasized the need for human-centric, trustworthy, and responsible AI development.
Implementing AI Governance
Given the complexity of AI, a one-size-fits-all governance approach is deemed impractical. Proposals range from self-regulation to various forms of policy guardrails, each offering distinct levels of oversight and supervision.
EU lawmakers are advocating regulatory measures to enhance transparency and restrict AI surveillance applications, though concerns persist that such rules could hamper European innovation and prove difficult to implement in practice. Some analysts call for limits on the computational resources used to train AI models, since rapid growth in processing power is what allows models to produce increasingly sophisticated outputs in response to individual prompts. Others emphasize more immediate priorities, such as improving public AI literacy and developing ethical AI systems equipped with safeguards against discrimination, misinformation, and surveillance.
Open-source models help democratize access to the technology, but some call for restrictions on releasing them to guard against misuse by malicious actors. Leading AI companies and many national security experts support such restrictions, though critics caution that they could stifle competition and innovation and entrench the dominance of the largest AI firms. Others have proposed a global regulatory body, akin to the International Atomic Energy Agency, to oversee AI’s military applications.
The Future of AI
The U.S.-China dynamic is pivotal in shaping AI governance: U.S. policymakers are weighing restrictions intended to preserve American technological supremacy, while Beijing pursues its stated ambition to lead the world in AI theories, technologies, and applications by 2030.
AI technology is advancing rapidly, with the computing power used to train leading models estimated to have doubled roughly every 3.4 months since 2012. Some projections suggest that by 2025, AI models could be a hundred times more capable than their current iterations.
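For a rough sense of what a 3.4-month doubling time implies, here is a small back-of-the-envelope calculation in Python. The one-, three-, and five-year horizons are arbitrary illustrations, not figures from the article.

```python
# Illustrative arithmetic only: how much training compute grows if it doubles
# every 3.4 months (the figure cited above). The time horizons are examples.
DOUBLING_TIME_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Growth in compute over `months`, assuming a constant doubling time."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

for years in (1, 3, 5):
    print(f"After {years} year(s): ~{growth_factor(12 * years):,.0f}x more compute")
# Prints roughly 12x after one year, about 1,500x after three,
# and about 200,000x after five.
```

The point of the exercise is that a months-scale doubling time compounds into several orders of magnitude within a few years, which is why forecasts of near-term capability gains are so dramatic.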
A technopolar world order is emerging, with companies spearheading AI advancements wielding influence akin to nation-states in the absence of robust global governance. As these corporations assume unprecedented political clout, their involvement in shaping international regulations becomes imperative. The transformative potential of AI, as noted by Bremmer and Suleyman, offers solutions to significant global challenges, provided that appropriate safeguards are implemented to manage associated risks effectively.