AI in human–computer gaming: Techniques, challenges and opportunities

Technology · December 29, 2023

Human-computer gaming has a rich history and has served as a crucial tool for validating key artificial intelligence technologies. The Turing test, proposed in 1950, marked the inception of using human-computer games to assess machine intelligence, sparking a long line of AI systems designed to challenge professional human players.

One standout example is the checkers AI Chinook, developed in 1989, which claimed the world championship in 1994 when champion Marion Tinsley withdrew for health reasons. Another iconic achievement was IBM’s Deep Blue defeating chess world champion Garry Kasparov in 1997, a milestone in human-computer gaming history.

In recent years, a rapid surge in the development of human-computer gaming AIs, exemplified by the DQN agent, AlphaGo, Libratus, and OpenAI Five, has demonstrated their capability to defeat professional human players in specific games using advanced techniques, reflecting significant progress in decision-making intelligence.

Evolution of AI Systems in Human-Computer Gaming

With the emergence of powerful techniques such as Monte Carlo tree search, self-play, and deep learning, AIs like AlphaGo Zero have been able to surpass professional Go players, symbolizing a significant leap in the domain of large-state perfect-information games.

Additionally, OpenAI Five, employing self-play, deep reinforcement learning, and continual transfer via surgery, achieved the remarkable feat of defeating world champions in esports games, contributing valuable techniques for complex imperfect-information games. This success naturally raises the question of what challenges remain and where current techniques in human-computer gaming are headed.

Exploring the Challenges of Current Techniques

A recent paper published in Machine Intelligence Research offers a comprehensive review of successful human-computer gaming AIs and delves into their challenges through a meticulous analysis of current techniques. The paper surveys four primary types of games: board games, card games, first-person shooter (FPS) games, and real-time strategy (RTS) games.

Researchers highlight key factors that pose challenges for intelligent decision-making, including imperfect information, long time horizons, intransitive game dynamics, and multi-agent cooperation. The subsequent sections of the paper elaborate on the games and their corresponding AIs, shedding light on the specific techniques employed.

Board Game AIs: Pioneering Techniques

The AlphaGo series, based on Monte Carlo tree search (MCTS), made significant strides in mastering the game of Go, defeating professional players and attaining superhuman performance. The subsequent versions, AlphaGo Zero and AlphaZero, expanded their mastery to chess and Shogi, showcasing a robust reinforcement learning algorithm.
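The core loop behind MCTS, namely selection, expansion, simulation, and backpropagation, can be sketched on a toy game. The following is a minimal illustrative sketch only, not the AlphaGo series' actual code (which combines the search with learned policy and value networks); the names `TAKE`, `Node`, `best_uct`, and `mcts`, and the tiny take-away Nim game, are all invented here for illustration:

```python
import math
import random

TAKE = (1, 2)  # legal move sizes in toy Nim; taking the last stone wins

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones              # stones left after `move` was played
        self.parent, self.move = parent, move
        self.children = []
        self.untried = [m for m in TAKE if m <= stones]
        self.visits, self.wins = 0, 0.0   # wins for the player who played `move`

def best_uct(node, c=1.4):
    # UCB1: exploit high win rates, but keep exploring rarely-visited children
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(stones, iters=3000, seed=0):
    rng = random.Random(seed)
    root = Node(stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend fully expanded nodes via UCT
        while not node.untried and node.children:
            node = best_uct(node)
        # 2. Expansion: add one untried child
        if node.untried:
            m = rng.choice(node.untried)
            node.untried.remove(m)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: uniformly random playout from the new node
        left, opp_turn, mover_won = node.stones, True, True
        while left > 0:
            left -= rng.choice([m for m in TAKE if m <= left])
            mover_won = not opp_turn      # whoever just emptied the pile wins
            opp_turn = not opp_turn
        # 4. Backpropagation: flip the result at each level up the tree
        result = 1.0 if mover_won else 0.0
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move  # most-visited move
```

From 4 stones the optimal move is to take 1, leaving the opponent a multiple of 3; the search converges on this with no game-specific knowledge beyond the rules.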

Turning to card games, DeepStack and Libratus emerged as pioneering AI systems that defeated professional poker players in heads-up no-limit Texas hold’em. The subsequent foray into games like Mahjong and DouDiZhu presented new challenges for AI, yet notable successes were achieved by Suphx in Mahjong and DouZero in DouDiZhu.
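Both DeepStack and Libratus build on counterfactual regret minimization (CFR). Its core primitive, regret matching, fits in a few lines; the sketch below is a toy version on rock-paper-scissors that omits the game-tree recursion real CFR performs, and the names `payoff`, `strategy_from`, and `train` are invented here. Self-play drives the average strategy toward the uniform Nash equilibrium:

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return [0, 1, -1][(a - b) % 3]

def strategy_from(regrets):
    # Regret matching: play each action in proportion to its positive regret
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iters=50000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats = [strategy_from(regrets[p]) for p in (0, 1)]
        acts = [rng.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            got = payoff(acts[p], acts[1 - p])
            for a in range(ACTIONS):
                # Regret: how much better action `a` would have done
                regrets[p][a] += payoff(a, acts[1 - p]) - got
                strat_sum[p][a] += strats[p][a]
    total = sum(strat_sum[0])
    return [s / total for s in strat_sum[0]]  # average strategy ~ Nash

avg = train()
```

Note that it is the average strategy over all iterations, not the final one, that converges to equilibrium; here each action's probability approaches 1/3.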

First-Person Shooting and Real-Time Strategy Games: Complexity Unleashed

In the realm of FPS games, Quake III Arena’s Capture the Flag (CTF) mode imposed unique challenges, such as each agent’s limited access to other players’ states and a prohibition on explicit team communication. The learned agent FTW (For The Win) nonetheless demonstrated human-level performance, marking a significant step forward.

RTS games, with their complex environments and large-scale battles, emerged as a fertile ground for human-computer gaming research. Breakthrough AI systems like AlphaStar and Commander, capable of competing against grandmasters and professional players in StarCraft II, signified a new frontier in AI development.

Comparing and Summarizing Techniques

Reflecting on the current breakthroughs in human-computer gaming AIs, the paper categorizes the techniques into two main groups: tree search combined with self-play, and distributed deep reinforcement learning with self-play or population-play. Details of representative algorithms and their applicability across different games are thoroughly discussed.
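The second family, distributed deep reinforcement learning with self-play or population-play, shares a common skeleton: a learner trains against opponents sampled from a growing pool of frozen past versions of itself. The sketch below is an illustrative skeleton only, not any particular system's code; every name is invented here, `UniformPolicy` stands in for a deep network, and `update` stands in for a gradient step, again on a toy Nim game:

```python
import random

class UniformPolicy:
    """Stand-in for a learned policy; real systems use deep networks."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.version = 0
    def act(self, legal):
        return self.rng.choice(legal)       # placeholder for network inference
    def update(self, batch):
        self.version += 1                   # placeholder for a gradient step

def play_episode(policy, opponent):
    """Toy Nim(7): +1 if `policy` (moving first) wins, else -1, plus trajectory."""
    stones, traj, turn = 7, [], 0
    while stones > 0:
        actor = policy if turn == 0 else opponent
        move = actor.act([m for m in (1, 2) if m <= stones])
        if turn == 0:
            traj.append((stones, move))     # record the learner's experience
        stones -= move
        winner = turn                       # whoever just emptied the pile wins
        turn ^= 1
    return (1 if winner == 0 else -1), traj

def population_play(steps=50, checkpoint_every=10, seed=0):
    rng = random.Random(seed)
    learner = UniformPolicy(seed)
    pool = [UniformPolicy(seed + 1)]        # opponent pool: one frozen policy
    buffer = []
    for step in range(1, steps + 1):
        opponent = rng.choice(pool)         # sample an opponent from the pool
        result, traj = play_episode(learner, opponent)
        buffer.append((traj, result))
        learner.update(buffer[-16:])        # train on recent experience
        if step % checkpoint_every == 0:    # freeze a copy into the pool
            frozen = UniformPolicy(seed + step)
            frozen.version = learner.version
            pool.append(frozen)
    return learner, pool

learner, pool = population_play()
```

Training against a pool rather than only the latest version is what distinguishes population-play from plain self-play: it guards against the learner cycling through strategies that beat its current self but lose to older ones.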

Challenges and Future Directions

While considerable progress has been made in human-computer gaming, the paper identifies three key limitations of current techniques. First, AIs are typically tailored to a particular game or map, which hinders their broader applicability. Second, training these AIs is extremely resource-intensive, limiting advanced research to groups with large-scale compute.

Finally, evaluating AIs solely by victories against a small group of professional human players raises questions about the true extent of their expertise. These limitations underscore the challenges and potential directions for future research in this domain.


This in-depth survey of the techniques, challenges, and opportunities in human-computer gaming AIs not only provides valuable insights for beginners exploring this exciting field but also serves as a source of inspiration for researchers seeking to delve deeper into the intricacies of AI in gaming. The ongoing evolution of AI systems in human-computer gaming offers a promising avenue for further innovation and discovery.

Source: phys
