According to a recently published paper, the intelligence of AI systems such as ChatGPT differs fundamentally from human intelligence because AI lacks embodiment and genuine comprehension. This distinction underscores that AI has no inherent connection to, or concern for, people.
Anthony Chemero, a professor at the University of Cincinnati, recently co-authored a paper examining the differences between AI cognition and human thought.
The surge of artificial intelligence has elicited varied responses from technology professionals, government officials, and the general public. Many individuals express enthusiasm for AI systems such as ChatGPT, viewing them as valuable tools capable of reshaping society.
However, there is also apprehension among certain groups who fear that any technology labeled "intelligent" may eventually escape human authority and oversight.
Human Intelligence versus AI’s Unique Attributes
Anthony Chemero, a professor of philosophy and psychology in the University of Cincinnati's College of Arts and Sciences, argues that loose terminology muddies our understanding of AI: while AI may be smart, it does not acquire knowledge in the way humans do, even though "it does exist and BS like its maker."
In a paper co-authored for the journal Nature Human Behaviour, Chemero argues that while AI is undeniably clever in the conventional sense, such systems are sophisticated forms of computing that have existed for years.
Attributes and Limitations of AI
The paper begins by noting that ChatGPT and similar AI systems are large language models (LLMs) trained on vast datasets scraped from the internet, much of which reflects the biases of the people who produced the data.
Chemero contends that while LLMs generate impressive text, they also frequently fabricate content outright. They can construct fluent language, but they require vastly more training than humans do, and he asserts that they lack any true comprehension of what their statements mean, chiefly because they are not embodied. This sets them apart from human cognition.
Chemero suggests it might be more apt to characterize this as "bullsh*tting": LLMs generate sentences by repeatedly appending the statistically most probable next word, with no regard for accuracy or truthfulness.
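The next-word mechanism described above can be illustrated with a toy sketch. The tiny bigram table below is a hypothetical stand-in for a real neural language model (an actual LLM learns probabilities over a huge vocabulary from vast training data), but the generation loop works the same way: always append the statistically most likely next word, with no notion of truth.

```python
# Toy illustration of greedy next-word generation.
# The bigram counts stand in for a real language model's probabilities.
from collections import Counter

corpus = "the cat sat on the mat the cat sat on the rug the cat slept".split()

# Count which word follows which (a simple bigram table).
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def generate(start, steps):
    """Extend `start` by greedily picking the most frequent next word."""
    words = [start]
    for _ in range(steps):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation
        # The statistically most probable next word -- accuracy plays no role.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # → the cat sat on the
```

The loop never asks whether the sentence is true, only which word most often came next in its data, which is the point Chemero is making.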
Furthermore, he asserts that with slight manipulation, one can prompt an AI tool to produce "inappropriate, biased, sexist, and discriminatory" content.
The Human Dimension of Intelligence
Chemero’s paper underscores that LLMs, which lack embodiment and ongoing interaction with people and their environment, do not possess knowledge in the way humans do.
He emphasizes that LLMs are not genuinely immersed in the world and have no intrinsic stake in their surroundings, noting that embodiment is what "makes us care about our own survival and the environment in which we live."
The core message is that LLMs "couldn't care less," whereas humans are driven by concern for their own survival and the welfare of their environment.