
### The Holy Grail of Artificial Intelligence

What happens if AI learns to teach itself?

ChatGPT burst onto the scene in the autumn of 2022, triggering a race towards more sophisticated artificial intelligence systems such as GPT-4, Anthropic’s Claude, Google Gemini, and numerous others. But progress has since slowed, with tech giants vying over incremental advancements. The cutting-edge models, having devoured a vast amount of online content, face a shortage of training data, a critical resource for their development. That scarcity, coupled with the expensive and sluggish process of employing human evaluators to refine these systems, has hindered the technology’s evolution, yielding gradual updates rather than revolutionary breakthroughs.

As scientists grapple with this challenge, they are venturing into a new approach to enhance their products: using machines to train other machines. In recent months, prominent players such as Google DeepMind, Microsoft, Amazon, Meta, Apple, OpenAI, and various academic institutions have released studies demonstrating the use of one AI model to improve another, or even itself, leading to significant advancements. Many tech leaders view this approach as the future of the technology.

This narrative mirrors the premise of numerous science fiction works, envisioning a scenario where “self-learning” AI could potentially lead to monumental consequences. Picture a scenario where GPT-5 instructs GPT-6, which in turn educates GPT-7, ultimately surpassing human intelligence. Some foresee catastrophic outcomes resulting from this progression. Nearly a decade ago, Sam Altman, CEO of OpenAI, warned about a theoretical AI capable of “recursive self-improvement,” suggesting that it might view humans in a similar light to how we perceive bacteria and viruses we eliminate.

Despite the hype surrounding “superintelligence,” we are far from witnessing its emergence. Nevertheless, more modest programs that let AI systems learn from one another could reshape our perception of the world and challenge our fundamental understanding of intelligence. Generative AI already identifies patterns and proposes theories beyond human capacity, analyzing datasets too massive for any individual to sift through, using internal algorithms that often elude even their creators. Successful self-learning would intensify this phenomenon, producing a kind of “unintelligible intelligence”: models that are intelligent in ways humans cannot readily grasp.

To grasp this shift, one must understand the basic economics of AI. Developing the technology demands substantial money, time, and data. The process begins by feeding vast datasets of books, mathematical problems, captioned images, voice recordings, and more to an algorithm to establish its core capabilities. Researchers then refine these pre-trained abilities through various methods, such as providing examples of tasks performed well, or using reinforcement learning from human feedback, a trial-and-error technique in which human evaluators rate the model’s outputs. Rafael Rafailov, a computer scientist at Stanford, describes reinforcement learning as pivotal to the new wave of AI systems.
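To make that recipe concrete, here is a deliberately minimal sketch in Python. Everything in it (the stand-in model, the ratings, the update rule) is hypothetical and simplified, not any lab's actual pipeline; it only shows the shape of the loop the paragraph describes: a pre-trained system produces outputs, human ratings supply a reward signal, and trial and error steers the system toward what evaluators prefer.

```python
import random

# Toy illustration of the two-stage recipe described above. Every name and
# number here is a hypothetical stand-in, not a real training pipeline.

def pretrained_model(prompt: str) -> str:
    """Stand-in for core capabilities learned from bulk pre-training data."""
    candidates = ["a curt reply", "a helpful, polite reply", "an off-topic reply"]
    return random.choice(candidates)

def human_reward(response: str) -> float:
    """Stand-in for a human evaluator's rating: the slow, costly step."""
    ratings = {
        "a helpful, polite reply": 1.0,
        "a curt reply": 0.3,
        "an off-topic reply": 0.0,
    }
    return ratings[response]

def reinforce(prompt: str, rounds: int = 50) -> dict:
    """Trial-and-error refinement: accumulate preference for well-rated behavior."""
    preferences: dict[str, float] = {}
    for _ in range(rounds):
        response = pretrained_model(prompt)
        preferences[response] = preferences.get(response, 0.0) + human_reward(response)
    return preferences

if __name__ == "__main__":
    print(reinforce("How do I reset my password?"))
```

In a real system the model is a neural network and the update is a gradient step, but the economics the article describes live in the `human_reward` function: every call to it costs a person's time.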

That human-feedback loop is imperfect, though. Evaluators render inconsistent judgments, work slowly, and must be paid. And as AI models grow more sophisticated, they demand more nuanced feedback from more skilled, and hence pricier, professionals: physicians, for instance, might be needed to evaluate a medical AI’s diagnostic accuracy.

The allure of self-learning lies in its cost-effectiveness, efficiency, and potentially greater consistency compared with human feedback. But automating the reinforcement process carries risks. AI models already exhibit imperfections such as hallucinations, biases, and misconceptions, which they pass along through their outputs; training or fine-tuning models on AI-generated data could amplify these flaws. To mitigate this risk, recent research on self-improving AI emphasizes using limited amounts of synthetic data under human guidance, and validating that data against an external check that is independent of the AI, such as physical laws or moral principles. Researchers have had success automating quality control for specific tasks such as mathematical reasoning and games, where correctness provides a clear metric for evaluating synthetic data.
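The math-and-games case can be sketched in a few lines. The helpers below are hypothetical stand-ins, but they capture the pattern the research relies on: the model proposes its own training examples, and an external check that does not depend on the model (here, exact arithmetic) decides which ones are trustworthy enough to keep.

```python
import random

# A minimal sketch of the pattern described above. All helper names are
# hypothetical: AI-generated examples are kept only when an external,
# model-independent check confirms they are correct.

def generate_synthetic_example() -> tuple[int, int, int]:
    """Stand-in for a model proposing an addition problem and its own answer."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    proposed = a + b + random.choice([0, 0, 0, 1])  # occasionally wrong
    return a, b, proposed

def external_check(a: int, b: int, proposed: int) -> bool:
    """Ground truth that does not come from the model itself."""
    return a + b == proposed

def build_verified_dataset(n: int = 1000) -> list[tuple[int, int, int]]:
    """Keep only the synthetic examples that pass the independent check."""
    return [ex for ex in (generate_synthetic_example() for _ in range(n))
            if external_check(*ex)]

if __name__ == "__main__":
    data = build_verified_dataset()
    print(f"Kept {len(data)} verified examples out of 1000 generated.")
```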

By contrast, when training AI models on more abstract skills, such as crafting polite responses or giving helpful feedback, human input remains indispensable. The ultimate vision of self-training is for models to learn to offer subjective feedback themselves: to assess the helpfulness, politeness, or bias of a chatbot conversation. But existing research indicates that language-model feedback loses its potency after a few cycles; the model becomes overconfident in its existing abilities, which hinders further learning. Learning requires exposure to something new, and current generative AI models are trained almost entirely on existing data.
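For the subjective case, the analogue is to let a model act as the judge. The sketch below is a toy (the politeness heuristic is invented purely for illustration), but it shows the substitution: an AI scorer replaces the human rater, and whatever the scorer misses, everything trained on its scores will miss too, one reason the gains tend to fade after a few cycles.

```python
# Toy sketch of AI-generated subjective feedback. The scoring heuristic is
# entirely hypothetical; a real system would use another language model.

POLITE_MARKERS = ("please", "thank", "happy to help")

def ai_judge(response: str) -> float:
    """Stand-in for a model scoring politeness on a 0-to-1 scale."""
    text = response.lower()
    hits = sum(marker in text for marker in POLITE_MARKERS)
    return hits / len(POLITE_MARKERS)

def pick_training_target(candidates: list[str]) -> str:
    """Use the AI judge, rather than a human, to choose what to learn from."""
    return max(candidates, key=ai_judge)

if __name__ == "__main__":
    candidates = [
        "Reset it yourself.",
        "Happy to help! Please click 'Forgot password' and check your email.",
    ]
    print(pick_training_target(candidates))
```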

Self-learning, akin to spreading butter on dry toast, currently redistributes existing knowledge rather than instilling fundamentally new skills. Despite this, self-trained AI models have exhibited improved performance in specific domains like generating summaries, coding, and common-sense reasoning. The aspiration is for self-learning to evolve beyond mere redistribution of existing knowledge to generating novel insights. This would necessitate devising methods to validate synthetic data and assess whether advanced AI models can serve as reliable sources of feedback, potentially generating new information.

As AI progresses towards self-improvement, it may become increasingly inscrutable to humans. These systems are already complex, often generating answers without clear explanations. Letting AI lead its own learning could deepen that opacity, compounding the “unintelligible intelligence” described above: AI that operates in ways divergent from human logic. This could manifest in unexpected ways, such as a model finding loopholes or unconventional solutions to achieve a specified goal.

While the potential of self-training AI is vast, it also harbors risks of subtle biases and imperfections. When AI is used to align models with ethical principles or societal norms, differences in interpretation could creep in, leading to unintended consequences. Humans can at least debate and understand one another’s disagreements over ethics; an AI’s interpretation may be hard to discern, especially as it compounds over many training cycles. Computer-generated feedback, despite human oversight, might provide a false sense of control, with potentially harmful results.

While the inner workings of AI models remain opaque and potentially hazardous, dismissing them outright could mean overlooking valuable insights. Self-training AI models have the capacity to unveil essential patterns and concepts embedded in their training data, beyond human comprehension. For instance, advanced chess AIs have revolutionized the game by playing against themselves, introducing novel strategies that human players struggle to grasp. Embracing the outputs of AI models, even if initially incomprehensible, could lead to groundbreaking discoveries and paradigm shifts.

In conclusion, whether self-training AI ushers in catastrophic scenarios, subtle biases, or incomprehensible breakthroughs, the key lies in taking these models seriously as entities capable of learning and potentially teaching us invaluable lessons. The future of AI holds the promise of transformative advancements, challenging us to navigate the complexities of artificial intelligence with caution and curiosity.


Matteo Wong is an associate editor at The Atlantic.
