
### Evolution of AI: Comparing Self-learning Models with Traditional Artificial Neural Networks


By harnessing natural processes in neuromorphic hardware, a group of researchers at the Max Planck Institute for the Science of Light has introduced a more energy-efficient approach to AI training. The strategy, which diverges from conventional artificial neural networks, both reduces energy consumption and makes the training process itself more efficient. To demonstrate the idea, the team is developing an optical neuromorphic computer intended to advance AI systems.

New physics-based self-learning machines could replace today's artificial neural networks while saving energy.

Artificial intelligence (AI) demonstrates impressive performance capabilities, albeit at a significant energy expense. The electricity usage increases with the complexity of tasks performed by AI systems. Scientists Florian Marquardt and Victor López-Pastor from the Max Planck Institute for the Science of Light in Erlangen, Germany, have pioneered a more efficient approach to AI training. Their unconventional method, diverging from mainstream artificial neural networks, leverages physical processes.

The energy consumed in training advanced AI models remains undisclosed; OpenAI, the company behind the chatbot ChatGPT, has not published figures for GPT-3, the language model on which it is built. According to the statistics portal Statista, GPT-3's training is estimated to have required roughly as much energy as 200 German households of three or more people consume in a year. And while GPT-3 learns from its training data that a word such as "deep" is often followed by "sea" or "learning", by all accounts it does not grasp the underlying meaning of such phrases.
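The kind of word-association learning described above can be sketched as a minimal next-word frequency model. The tiny corpus and the counts below are invented purely for illustration; they have nothing to do with GPT-3's actual architecture or training data:

```python
from collections import Counter, defaultdict

# Invented toy corpus, chosen only so that "deep" is followed by
# "sea" and "learning", echoing the example in the text.
corpus = "the deep sea hides deep learning ideas and the deep sea glows".split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

# The most likely continuations of "deep" are just the words seen after it.
candidates = following["deep"].most_common()
print(candidates)  # [('sea', 2), ('learning', 1)]
```

A real language model replaces these raw counts with learned probabilities over long contexts, but the basic task, predicting which word comes next, is the same.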

### Neural networks on neuromorphic computers

In recent years, various research institutions have explored an approach to data processing known as neuromorphic computing, which promises to reduce the energy consumption of computers, especially in AI applications. Despite the name, conventional artificial neural networks are not neuromorphic: they run as software on ordinary digital computers, whose hardware merely simulates the operations of the brain. Such computers keep processor and memory separate, and the network's sequential processing forces a constant exchange of data between the two. Neuromorphic computing, by contrast, aims at hardware that itself works like the brain.

Florian Marquardt, director of the Max Planck Institute for the Science of Light and a professor at the University of Erlangen, explains that this data exchange between processor and memory alone consumes a great deal of energy when hundreds of billions of parameters, i.e. synapses, are trained with up to one gigabyte of data.
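A rough, purely illustrative calculation shows why this shuttling of parameters matters. Every number below is an assumption made for the sake of the sketch, not a figure from Marquardt or from any real training run:

```python
# Back-of-envelope sketch of the memory-traffic problem described in the
# article. All numbers are illustrative assumptions, not measurements.
parameters = 200e9          # "hundreds of billions" of synapses
bytes_per_parameter = 2     # assume 16-bit weights
picojoules_per_byte = 100   # assumed cost of moving one byte to/from memory

# Reading every weight once moves this much data between memory and processor:
traffic_gb = parameters * bytes_per_parameter / 1e9
energy_j = parameters * bytes_per_parameter * picojoules_per_byte * 1e-12

print(f"{traffic_gb:.0f} GB of traffic and ~{energy_j:.0f} J per full pass")
```

Training repeats such passes enormously many times, so a hardware design that keeps computation and memory in the same place avoids this cost entirely.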

If the human brain operated with the energy efficiency of silicon-transistor computers, evolutionary progress might well have been blocked by overheating. Unlike a computer, which works through tasks sequentially, the brain processes the many steps of a thought in parallel. Neuromorphic counterparts to biological neurons are being investigated in various systems worldwide, including optoelectronic circuits that compute with light instead of electrons, in which components serve simultaneously as switches and memory units.

### Self-learning physical machines optimize their own parameters

In collaboration with Victor López-Pastor, a researcher at the Max Planck Institute for the Science of Light, Florian Marquardt has developed an efficient training method for neuromorphic computers. Marquardt describes the concept as a "self-learning physical machine": the training itself is a physical process in which the machine optimizes its own parameters, without requiring external feedback, whereas traditional artificial neural networks depend on such feedback to adjust the strengths of their neural connections.

Marquardt highlights that dispensing with external feedback streamlines the training process, saving both time and energy. The method is designed to work with a wide range of physical processes, without requiring detailed knowledge of the underlying mechanism, provided the process meets two criteria: it must be reversible, and it must be sufficiently non-linear to handle the complex transformations between input data and results.
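The two criteria can be illustrated with a toy map that is both exactly invertible and non-linear. This is an invented mathematical example, not the physical process the researchers actually use:

```python
import math

# Toy illustration of the two criteria named in the text: the process
# must be reversible (invertible) and non-linear. The map below is an
# invented example, not the researchers' actual physical process.

def forward(x1, x2):
    """A non-linear 'shear': mixes x1 into x2 through tanh."""
    return x1, x2 + math.tanh(x1)

def backward(y1, y2):
    """Exact inverse: subtract the same non-linear term."""
    return y1, y2 - math.tanh(y1)

x = (0.7, -1.2)
y = forward(*x)
recovered = backward(*y)
print(recovered)  # recovers the original input
```

A purely linear map would also be invertible, but it could not represent the complex input-output transformations a neural network needs; a non-invertible map could not be run backwards at all. The shear above satisfies both conditions at once.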

Optics provides a suitable testbed for the concept, since reversible, non-linear processes are common there. Together with an experimental team, Marquardt and López-Pastor are therefore developing an optical neuromorphic computer that manipulates information through controlled interactions between light waves. The goal is to realize the self-learning physical machine in practice, paving the way for neural networks with many more synapses and far larger data-processing capacities.

Marquardt expects that efficiently trained neuromorphic computers will eventually surpass traditional ones, and that self-learning physical machines will play a crucial role in the further development of artificial intelligence.

Last modified: February 21, 2024