
**DeepMind’s Latest Innovation: A New Multi-Game AI Advancing General Intelligence**

A new Google DeepMind algorithm that can tackle a much wider variety of games could be a step toward more general AI.

Some of the most demanding games ever played have been conquered by AI, with each system tailored to the challenges of a single game. A new DeepMind engine, according to its creators, points toward more general AI by excelling across a wide array of games.

Using games as a benchmark for AI is a longstanding practice. The defeat of chess grandmaster Garry Kasparov by IBM’s Deep Blue engine in 1997 marked a significant milestone in the field. Similarly, DeepMind’s AlphaGo beating top Go player Lee Sedol in 2016 sparked a surge of interest in AI advancements.

DeepMind extended this progress with AlphaZero, which mastered not only Go but also chess and shogi. However, AlphaZero was limited to perfect-information games such as Go, in which the entire state of the game is visible to both players; the only unknown is the opponent’s strategy.

Imperfect-information games, by contrast, hide some details from the players, as in poker, where players cannot see each other’s hands. AI systems can now compete with experts in such games, but the algorithms they rely on differ significantly from those behind AlphaZero.

To build a single AI that can master chess, Go, and poker, DeepMind researchers combined elements from both approaches. They believe the result could accelerate the development of versatile AI systems capable of a wider range of tasks.

Perfect-information games have traditionally been tackled with tree search, which explores how a game could unfold across many possible sequences of moves. AlphaGo combined tree search with machine learning, training its model through self-play to steadily refine its skill.
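To give a flavor of the tree-search idea, here is a minimal sketch of minimax search on a toy take-away game; it is only an illustration of the general technique, not DeepMind’s implementation, and the game and function names are made up for the example.

```python
# Minimal sketch of game-tree search (minimax) on a toy take-away game:
# two players alternately remove 1-3 stones; whoever takes the last stone wins.
# Illustrative only -- not the search used by AlphaGo or Student of Games.

def legal_moves(stones):
    # A player may remove 1, 2, or 3 stones from the pile.
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing=True):
    """Score a position from the root player's perspective: +1 win, -1 loss."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    results = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
    return max(results) if maximizing else min(results)

def best_move(stones):
    # Explore every move sequence and pick the move with the best outcome.
    return max(legal_moves(stones), key=lambda m: minimax(stones - m, maximizing=False))

if __name__ == "__main__":
    print(best_move(5))  # 1: leaving 4 stones puts the opponent in a losing position
```

Real game engines cannot expand the full tree for games like chess or Go, which is why AlphaGo pairs search with a learned model that prioritizes promising moves and evaluates positions without searching to the end.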

For imperfect-information games, researchers instead rely on game theory, using mathematical models to compute strategies that cannot be exploited. DeepStack, for instance, beat professional players at no-limit poker in 2016, but it was specialized for that single game. By combining the approaches behind DeepStack and AlphaZero, the researchers built the new engine, Student of Games, which is proficient in both types of game.
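For a taste of the game-theoretic side, the sketch below shows regret matching, a simple self-play procedure whose averaged strategy converges toward an unexploitable mix in rock-paper-scissors. It is a hedged illustration of the style of reasoning used in poker AIs, not the actual DeepStack or Student of Games algorithm, and all names here are invented for the example.

```python
# Regret matching on rock-paper-scissors: each player shifts probability toward
# actions it "regrets" not having played; the averaged strategy approaches the
# equilibrium (uniform play). Illustrative only.
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[a][b] = result of a vs b

def strategy_from_regrets(regrets):
    # Play each action in proportion to its accumulated positive regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=100_000):
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        moves = [random.choices(range(ACTIONS), weights=s)[0] for s in strategies]
        for p in range(2):
            sign = 1 if p == 0 else -1                # zero-sum: player 1's payoff is negated
            actual = sign * PAYOFF[moves[0]][moves[1]]
            for a in range(ACTIONS):
                # How much better action a would have done than the action played.
                alt = sign * (PAYOFF[a][moves[1]] if p == 0 else PAYOFF[moves[0]][a])
                regrets[p][a] += alt - actual
                strategy_sums[p][a] += strategies[p][a]
    # The *average* strategy over training converges toward the equilibrium.
    return [[s / iterations for s in sums] for sums in strategy_sums]

if __name__ == "__main__":
    print(train()[0])  # approaches [1/3, 1/3, 1/3]
```

Poker-playing systems such as DeepStack build on far more elaborate versions of this idea, combined with learned value estimates, to handle the enormous number of hidden-card possibilities.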

Excelling at a single task is common in AI research; the harder challenge is building AI that performs well across varied scenarios. Writing in Science, the researchers argue that the fusion of strategies in Student of Games is a crucial step toward algorithms that can adapt to diverse environments.

Despite these advances, expectations should be tempered: the system still operates in structured game environments with clear rules, far removed from the complexities of the real world. Even so, uniting two previously distinct families of game-playing strategies in a single model hints at a blueprint for more robust and versatile AI in the future.
