In AI, as in any other domain, narratives matter. AI companies often frame AI development as an arms race, a story that justifies their rush to market. But that framing can convince people that accelerating the development of advanced AI is necessary, even if it raises the risk of human extinction.
Katja Grace challenges this prevailing myth. As the principal researcher at AI Impacts, a nonprofit project on AI safety under the Machine Intelligence Research Institute, Grace argues that AI should not be viewed as an arms race. In a Time article published in May, she highlighted that “the AI scenario differs significantly. Unlike in a traditional arms race, where a party could potentially surge ahead and emerge victorious, in the realm of AI, the ultimate winner might be advanced AI itself. This dynamic renders hastiness a losing strategy.”
Furthermore, Grace emphasizes that if a particular laboratory or nation invests time in addressing AI safety concerns instead of rushing forward, others may adopt these improvements, leading to collective benefits. She illustrates this concept by stating:
“A more fitting analogy for AI could be a group standing on fragile ice, with abundant treasures awaiting on the distant shore. They could all reach the riches by proceeding cautiously, but one individual thinks: ‘If I sprint, the ice may crack, risking a collective downfall. Yet, I believe I can sprint more cautiously than Bob, and he might take the chance.’”
In the context of AI, we might find ourselves in a scenario diametrically opposite to a race. Opting for a slow and prudent approach could be the optimal individual course of action. Collectively, we must avoid recklessly endangering the world in a misguided race towards destruction, especially when avenues for coordinated solutions remain largely unexplored.
Grace introduces a counter-narrative: less about an arms race and more reminiscent of the tragedy of the commons—a classic coordination dilemma with known, albeit theoretical, solutions.
Grace has also helped shape the discourse around AI risk with data. In a widely cited survey of machine learning researchers conducted in the summer of 2022, 48% of respondents estimated at least a 10% chance that AI’s consequences would be “extremely dire (e.g., human extinction).”
The methodology behind this year’s Future Perfect 50 was an extensive process. Starting from the previous year’s list, the team brainstormed, researched, consulted its audience, and reached out to sources. It aimed for diversity across theories of change, academic disciplines, ages, geographies, and identities, so that no single category was overrepresented.
Since late 2022, Grace has been a forceful advocate for slowing down AI development. Against the belief that technological progress is inevitable and resistance is futile, she noted on her blog that there are many technologies we have chosen not to pursue, or have regulated tightly — human cloning and human germline modification among them.
Not long ago, advocating for a deceleration in AI development was considered taboo within the AI community, requiring courage to voice such opinions. Over time, Grace’s initial calls have been echoed in successive open letters signed by concerned technologists. Presently, in late 2023, advocating for a cautious approach is a relatively common stance.
Grace has not only demonstrated her courage but also underscored the power of reshaping narratives.