Many researchers in the field of artificial intelligence consider it possible that superhuman AI could lead to human extinction, although there is considerable debate and uncertainty about that prospect.
These findings come from a survey of 2700 AI researchers who recently presented work at six leading AI conferences, making it the largest survey of its kind to date. Participants were asked when they expect key AI milestones to be reached and what societal consequences, good or bad, might follow. Almost 58 percent of respondents said they consider the chance of human extinction or other extremely severe AI-related outcomes to be at least 5 percent.
Katja Grace of the Machine Intelligence Research Institute in California, one of the paper’s authors, said the key takeaway is that most AI researchers do not dismiss the idea that advanced AI could threaten humanity. “It’s a noteworthy indication that most AI researchers do not find it entirely implausible that advanced AI could lead to the destruction of humanity,” she said. “I believe the general acknowledgment of a non-negligible risk speaks volumes, regardless of the specific percentage.”
Despite these concerns, there is no need for immediate alarm, said Émile Torres at Case Western Reserve University in Ohio, noting that expert surveys in AI have a poor track record of predicting the field’s development. A 2012 study found that, over the long term, AI experts’ forecasts were no more accurate than predictions from non-experts. The authors of the new survey likewise acknowledge that AI researchers are not necessarily skilled at forecasting AI’s future trajectory.
Compared with responses to a previous version of the survey conducted in 2022, many AI researchers now expect AI to reach certain milestones earlier than previously predicted. This shift coincides with the launch of ChatGPT in November 2022 and Silicon Valley’s subsequent rush to deploy similar AI chatbot services based on large language models.
The surveyed researchers gave a 50 percent or higher probability that, within the next decade, AI systems will be able to accomplish most of 39 sample tasks, including generating new songs that closely resemble a Taylor Swift hit or coding an entire payment processing site from scratch. Tasks such as physically installing electrical wiring in a new home or solving longstanding mathematical mysteries are expected to take longer.
The researchers put a 50 percent chance on AI outperforming humans at every task by 2047, and a 50 percent chance on all human jobs becoming fully automatable by 2116. These estimates are 13 years and 48 years earlier, respectively, than those given in the previous year’s survey.
Nonetheless, Torres cautioned that heightened expectations of AI progress may prove misplaced. “Many of these breakthroughs are quite unpredictable. There remains a distinct possibility that the field of AI may experience another period of stagnation,” Torres said, referring to the drop in funding and corporate interest in AI during the 1970s and 80s.
Beyond worries about superhuman AI, many researchers voiced more immediate concerns about existing AI applications. More than 70 percent rated AI-enabled scenarios involving deepfakes, manipulation of public opinion, engineered weapons, authoritarian control of populations and worsening economic inequality as substantial or extreme concerns. Torres also highlighted the danger of AI spreading misinformation about existential issues such as climate change, and of it undermining democratic governance.
Torres concluded by noting that the technological capabilities needed to seriously undermine democracy already exist, particularly with the 2024 election approaching in the United States.