
### AI Unleashed: Impending Existential Crisis Without Control

“The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

According to a scholar, artificial intelligence (AI) has the potential to trigger an “existential catastrophe” for humanity.

Roman Yampolskiy, an associate professor of computer science and engineering at the University of Louisville’s Speed School of Engineering, has concluded, after an extensive review of the relevant literature, that there is no concrete evidence that AI can be effectively controlled. He argues that even if partial measures of control are established, they may not suffice.

Yampolskiy asserts that in the absence of such evidence, the development of AI should be reconsidered. As an AI safety expert, he highlights the lack of comprehensive understanding, vague definitions, and inadequate research surrounding this technology.

While acknowledging AI’s potential to reshape society significantly, Yampolskiy questions whether this transformation will necessarily benefit humanity. In his upcoming book, “AI: Unexplainable, Unpredictable, Uncontrollable,” he delves into the potential risks associated with AI advancement.

Describing the looming scenario as an almost inevitable existential crisis, Yampolskiy emphasizes the critical nature of the situation. He warns that the future of the universe hangs in the balance, with outcomes ranging from prosperity to extinction.

The researcher challenges the prevailing belief among scientists that the challenges posed by AI can be effectively managed. He stresses the need for concrete evidence supporting this assumption before efforts to build controlled AI proceed. Yampolskiy advocates for substantial investment in AI safety measures, citing the near certainty of AI superintelligence development.

Yampolskiy maintains that our ability to create sophisticated AI surpasses our capacity to regulate or validate it effectively. Despite the potential benefits, he argues that advanced AI systems will always carry inherent risks that cannot be entirely mitigated. This conclusion follows a meticulous review of available literature.

Yampolskiy suggests that the AI community should prioritize risk reduction while striving to maximize potential benefits.

For research article suggestions or inquiries related to artificial intelligence, readers are encouraged to contact science@newsweek.com.
