
### Are We in an Age of Existential Dread?

Inside the wild world of AI doomsayers.

If you have been following the news over the past year or two, you have probably noticed that artificial intelligence (AI) features prominently. Depending on the source, discussions of AI tend to fall into one of two extremes: it is portrayed either as the harbinger of humanity’s downfall or as the pathway to a utopian future.

The dichotomy in the AI discourse is telling, a reflection of an era of rapid technological change and social upheaval. Against this backdrop, concern about the trajectory we are on is widespread, almost universal.

The question then arises: how much of this concern is justified, and when does legitimate worry transform into full-blown panic?

To dig into these questions, I spoke with Tyler Austin Harper on The Gray Area. Harper, a professor of environmental studies at Bates College whose writing has appeared in The New York Times, drew parallels between contemporary existential anxieties about AI and historical ones, particularly those of the 1920s and ’30s, a period of genuine apprehension about machine technology and the nascent research that would eventually produce nuclear weapons.

The following excerpt from our conversation has been edited for length and clarity. For the full discussion, listen to the complete podcast episode on Apple Podcasts, Google Podcasts, Spotify, Stitcher, or wherever you get podcasts. New episodes drop every Monday.


Sean Illing

In your observation of the current discourse on AI and existential threats, what stands out to you the most?

Tyler Austin Harper

What stands out most is how steeped Silicon Valley’s prevailing ideology is in science fiction. There are genuine risks associated with AI, but much of the conversation there is filtered through a sci-fi lens, drawing on works like The Matrix: fears of authoritarian AI systems, or the idea that we might already be living inside a simulation.

It is also important to understand how scholars define an existential risk, because the definition extends beyond human extinction. An existential risk is a threat that could either cause human extinction or prevent our species from realizing its full potential. That broader definition covers risks that would foreclose projects like space colonization, the creation of digital consciousness, or the building of a cosmic civilization, ambitions that many in Silicon Valley share.

So when Silicon Valley warns that AI is an existential risk, it is not necessarily voicing a fear of human extinction. It may be voicing a fear that our species’ long-term potential will be curtailed, which gets speculative very quickly.

Sean Illing

The stark polarization in discussions about AI, framed as either a doomsday scenario or a gateway to paradise, is striking. Does this extreme binary surprise you, or is it a familiar pattern?

Tyler Austin Harper

This polarization is a recurring theme. In Silicon Valley, there are people who forgo retirement savings because they anticipate either a digital utopia, with universal basic income and the dissolution of capitalism, or an apocalypse that makes long-term planning pointless.

Similar patterns show up in the climate change discourse, which oscillates between outright denialism and apocalyptic visions of an uninhabitable Earth. The most probable scenario usually lies somewhere in between: a future that resembles our present, with worse problems, but short of catastrophe. The AI discourse mirrors this all-or-nothing mentality, promising either a techno-utopian paradise or a dystopia under AI dominion.

Sean Illing

What distinguishes a legitimate worry from a full-blown panic when contemplating existential risks?

Tyler Austin Harper

Extinction panics typically arise in response to sudden scientific breakthroughs, rapid technological change, or geopolitical instability, all of which foster a collective sense of uncertainty and escalating risk. They work much like moral panics, which take reasonable concerns and inflate them into exaggerated crises, and the current climate reflects that pattern.

Climate change and AI raise genuine concerns, but it is vital not to succumb to fatalism. Panic means catastrophizing and embracing a sense of inevitability; worry means recognizing real challenges while retaining a sense of agency and rationality in navigating the future.


To hear the rest of the conversation, listen to the full podcast episode here.

