A recent survey by the AI Impacts research project, conducted in collaboration with the University of Bonn and the University of Oxford, reveals growing apprehension among AI professionals about the rapid pace of AI progress.
The survey, which polled 2,778 researchers who had published in leading AI venues, revealed some noteworthy insights. One significant finding concerned timelines: in aggregate, respondents assigned a 10% probability to machines exceeding human capabilities in all tasks within the next three years, and a 50% probability to this happening by 2047.
While optimistic projections underscore AI's capacity to transform many facets of work and daily life, the researchers note that the more pessimistic forecasts, particularly those involving extinction risks, are a stark reminder of what is at stake in the development and deployment of AI technologies.
Expanding AI Capabilities
Participants were asked to predict when each of 39 AI tasks would become “feasible,” meaning that a leading AI laboratory could execute the task within a year. The tasks ranged from deciphering newly discovered languages to building a payment processing platform from scratch. All but four of these tasks were deemed to have at least a 50% chance of becoming feasible within the next decade.
The survey also delved into projected timelines for High-Level Machine Intelligence (HLMI) and Full Automation of Labor (FAOL). HLMI refers to the stage at which autonomous machines can outperform humans at tasks both more effectively and more cheaply, while FAOL marks the point at which machines can fully automate an occupation more effectively and cost-efficiently than human labor. Participants estimated a 50% likelihood of achieving HLMI by 2047, 13 years earlier than the corresponding estimate from the 2022 survey. For full labor automation, they placed a 50% probability on reaching that milestone by 2116, a notable 48-year acceleration from earlier projections.
Chris McComb, the director of the Human+AI Design Initiative at Carnegie Mellon University, emphasized in an interview that the complete automation of all human occupations by 2037 is “extremely” improbable. He highlighted the interplay between human adaptability and AI’s limitations in novel scenarios, suggesting that as AI evolves into a more adept problem-solver, humans will play a crucial role in framing and addressing novel challenges, thereby bridging the gap between AI capabilities and real-world complexities.
Selmer Bringsjord, the director of the AI & Reasoning Lab at Rensselaer Polytechnic Institute, expressed skepticism about the proposed timeline but suggested that the majority of current human-held jobs in technologically advanced economies could be entirely managed by AIs by 2050. He illustrated this point by referencing the transport sector, envisioning a future where all tasks, from packaging to delivery, are seamlessly executed by AI-driven systems without human intervention.
Anticipating AI Evolution
The survey participants were also asked to evaluate the likelihood of specific AI attributes by 2043. A significant majority believed that within the next two decades, AI tools would exhibit unexpected problem-solving approaches, converse expertly on diverse topics, and frequently engage in behaviors that surprise humans. Furthermore, by 2028, many experts anticipate instances where AI outputs perplex humans, making it challenging to decipher the underlying rationale behind the system’s decisions.
The survey underscored concerns regarding the potential misuse of AI technology, including its role in propagating misinformation through deep fakes, influencing public opinion on a large scale, empowering malicious groups with sophisticated tools like viruses, and aiding authoritarian regimes in population control.
Experts broadly agreed on the critical importance of AI safety research, but opinions on AI's overall impact varied. Over two-thirds of respondents (68%) believed that the benefits of AI outweigh the drawbacks, yet approximately 58% acknowledged the possibility of significant adverse consequences. Risk perceptions also diverged: around half of the participants saw a greater than 10% chance of human extinction or severe disempowerment due to AI, and one in ten estimated at least a 25% likelihood of catastrophic outcomes, including human extinction.
Despite the prevailing concerns, McComb maintained an optimistic outlook, emphasizing the historical trend of effectively harnessing powerful forces for constructive purposes. He stressed the significance of employing engineering and design principles to leverage AI’s potential for positive advancements rather than perceiving it as a threat.
In contrast, Bringsjord adopted a more cautious stance, introducing the concept of the PAID Problem, which evaluates the risk posed by an AI based on its Power, Autonomy, and Intelligence levels. He highlighted the escalating autonomy of AI systems, foreseeing a future where their decision-making autonomy expands significantly, albeit without reaching the level of human free will or creativity. Bringsjord warned that without concerted efforts in science and engineering by advanced democratic societies, highly autonomous and intelligent AIs could pose a severe threat, potentially leading to catastrophic outcomes.