Recent discourse has focused heavily on the potential societal impacts of AI. While some business leaders and AI advocates warn that the technology's rapid advancement could lead to catastrophic outcomes worldwide, a comprehensive survey of 2,778 AI researchers revealed a more nuanced perspective.
While a majority of respondents acknowledged that existential threats are possible, they were skeptical that such dramatic consequences are likely. Roughly 58% of those surveyed put the probability of human extinction or similarly severe repercussions from AI advancement at about 5%.
Published by researchers and scientists from institutions including Oxford and Bonn, the survey examined future technology timelines and the societal implications of AI progress. According to Katja Grace, one of the study's authors, the findings signal a cautious acknowledgment among AI researchers that advanced AI could plausibly pose risks to humanity, indicating a prevalent sense of concern.
Silicon Valley has debated for years how real a threat AI actually poses to humanity. Notably, prominent AI experts such as Yann LeCun and Andrew Ng, co-founder of Google Brain, have rejected extreme doomsday scenarios. LeCun has even accused tech executives like Sam Altman of harboring concealed motives for stoking fears about AI.
More recently, LeCun pointed to attempts by leading AI companies to shape the regulatory landscape in their favor by advocating for stringent regulations, a tactic he referred to as "regulatory capture."