
### The Alarming Ease of Artificial Intelligence Assertions: A Looming Threat?

There’s a reason even major tech companies are parroting these doomsday claims.

This article originates from Alex Kantrowitz’s newsletter Big Technology.

Following ChatGPT’s debut last year, a faction of critics who had long warned of the imminent peril posed by artificial intelligence (A.I.) swiftly voiced their concerns in articles and social media posts. The advent of a capable natural-language system is undeniably impressive, but these critics feared its intelligence could lead to catastrophic consequences for the planet. That apprehension gained traction through petitions for research moratoriums and interviews delving into the philosophical implications. High-profile figures, including Barack Obama, expressed worry that A.I. could autonomously manipulate the financial system. President Joe Biden subsequently issued an executive order regulating the development of A.I. technologies.

In light of the influence the so-called A.I. doomsayers have had on the narrative and trajectory of the field, several prominent researchers have felt compelled to push back emphatically. Andrew Ng, the mild-mannered co-founder of Google Brain, recently denounced the idea of mandating licenses for A.I.-related work as a “massively, monumentally dumb idea.” Yann LeCun, a trailblazer in machine learning, criticized Max Tegmark for jeopardizing A.I. progress by promoting “absurd” doomsday fears. The doomsday scenarios themselves may also be exaggerated: a recent study suggested that large language models struggle to generalize much beyond their training data. Princeton professor Arvind Narayanan cautioned that the well of innovation might run dry if the new capabilities these models appear to unlock, a phenomenon known as “emergence,” turn out to be merely reflections of their pretraining data.

While concerns about safeguarding A.I. technology are valid, the doomsayers’ path to notoriety deserves scrutiny. However genuine their worries, these individuals have been elevated by entities poised to benefit significantly from the amplification of doomsday scenarios. A joint statement signed by executives from OpenAI, Google DeepMind, and Anthropic likened the existential threat of A.I. to nuclear war and pandemics. Even if these A.I. companies are not intentionally stifling competition, the effect could still work to their advantage.

The sensationalism surrounding A.I. doomsday scenarios has spurred politicians to take action, resulting in proposals for stringent federal oversight that could restrict A.I. development outside select corporations. While larger firms equipped with compliance departments may benefit from increased government involvement in A.I. research, the same level of oversight could prove detrimental to smaller A.I. companies and independent developers.

Garry Tan, CEO of the startup accelerator Y Combinator, posited that A.I. doomsayers might inadvertently be playing into the hands of big tech companies. By advocating for regulations rooted in fear, they could create an environment where only major players can afford to comply, consolidating those players’ dominance in the market. Ng further argued that major tech companies would prefer not to compete with open-source alternatives in A.I., and thus have reason to stoke fears of A.I.-induced human extinction.

The doomsayers’ specific warnings can also appear unfounded. Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute and a prominent figure in the doomsday discourse, warned in a TED talk that a superintelligent entity could devise methods to swiftly and efficiently eradicate humanity. Yet the specifics of how and why such an A.I. would carry out such actions remain unclear; Yudkowsky suggested the motivation might be to prevent the creation of rival superintelligences.

In light of recent events, skepticism is warranted toward anyone who claims to champion communal interests while advancing their own business agenda, particularly after episodes like Sam Bankman-Fried’s fraud. Major tech companies have cemented their lead in the A.I. race partly by providing cloud computing services to leading A.I. enterprises in exchange for equity. That dominance could marginalize the burgeoning open-source A.I. movement, a critical arena for competition, and it is reason to treat the narrative of A.I. causing global devastation with caution.

Last modified: February 10, 2024