ERCIM News No. 57, April 2004
SPECIAL THEME

Game Artificial Intelligence that Adapts to the Human Player

by Pieter Spronck and Jaap van den Herik


While the audiovisual qualities of games have improved significantly over the last twenty years, game artificial intelligence (AI) has been largely neglected. Since the turn of the century, game development companies have discovered that it is now the quality of the game AI that sets good games apart from mediocre ones. The Institute of Knowledge and Agent Technology (IKAT) of the Universiteit Maastricht examines methods to enhance game AI with machine learning techniques. Several typical characteristics of games, such as their inherent randomness, require novel machine learning approaches before learning can be applied successfully to game AI.

Most commercial computer games contain computer-controlled agents that oppose the human player. 'Game AI' encompasses the decision-making capabilities of these agents. For implementing game AI, especially for complex games, developers usually resort to rule-based techniques in the form of scripts. Scripts have the advantage that they are easy to understand and can be used to implement fairly complex behaviour.
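
As an illustration, a script of this kind is essentially an ordered list of condition-action rules that is evaluated whenever the agent must act. The sketch below is a minimal, hypothetical example; the Agent interface and the rules themselves are assumptions, not taken from any particular game:

```python
# Minimal sketch of a rule-based game-AI script: an ordered list of
# (condition, action) pairs in which the first rule whose condition holds fires.
# The Agent interface and the rules are hypothetical illustrations.

class Agent:
    def __init__(self, health, mana):
        self.health = health
        self.mana = mana

    def enemies_in_range(self):
        return []  # placeholder: a real agent would query the game world

# Each rule: (name, condition, action); actions are just printed here.
SCRIPT = [
    ("heal",   lambda a: a.health < 25 and a.mana >= 10, lambda a: print("cast heal")),
    ("attack", lambda a: len(a.enemies_in_range()) > 0,  lambda a: print("attack nearest enemy")),
    ("idle",   lambda a: True,                           lambda a: print("stand guard")),
]

def execute_script(agent, script):
    """Fire the first rule whose condition is satisfied and return its name."""
    for name, condition, action in script:
        if condition(agent):
            action(agent)
            return name

execute_script(Agent(health=20, mana=30), SCRIPT)  # -> prints "cast heal"
```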

'Smart' game AI endows agents with intelligent tactical behaviour that can outwit even the best human players. Unfortunately, even in state-of-the-art games the quality of the game AI is fairly low. Typically, the agents are inflexible and make mistakes, owing to the static nature of their scripts. Reviewing game AI implementations in general, we find that developers deal with the low quality of game AI by pitting the human player against agents that are simply physically stronger, rather than against agents that play the game intelligently.

Current research at the IKAT investigates methods of enhancing complex game AI with automatic learning capabilities. Agents controlled by adaptive game AI are able to correct their own mistakes, and can change their behaviour to deal successfully with previously unseen human-player tactics.

When game AI adapts during gameplay (so-called 'online learning'), it can only learn by observing actual encounters between the human player and the computer-controlled opponents. In general, not many such encounters take place over the course of a game. 'Regular' machine learning techniques (such as evolutionary learning, neural networks and reinforcement learning) are difficult to apply to game AI, because they need an inconveniently large number of observations. Moreover, they are not suited to dealing with the large amount of randomness that is characteristic of commercial games.

Figure: Combat between adaptive AI (the white team) and manually developed AI (the black team).

One of the solutions IKAT introduced to the problem of online learning in games is a novel technique called 'dynamic scripting'. Dynamic scripting is an online learning technique that is computationally fast, effective, robust and efficient. It maintains several rulebases with domain knowledge, one for each opponent type in the game. These rulebases are used to generate a new script for every new opponent encountered. When rules are extracted from a rulebase to form a new script, rules that seemed to work well in earlier encounters have a higher chance of being selected than rules that seemed to evoke inferior behaviour. The selection probabilities are updated after every encounter, which allows the rulebase to quickly optimise the generation of scripts that perform well, regardless of the tactics exhibited by the human player.
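
The sketch below illustrates this core idea in a simplified form: rules are drawn from a rulebase with probability proportional to their weights, and the weights of the rules used in a script are adjusted after each encounter. The rulebase contents, script size and reward values are illustrative assumptions, not the parameters of the published algorithm.

```python
import random

# Simplified sketch of dynamic scripting: weighted rule selection plus a
# weight update after each encounter. All numbers and rules are hypothetical.

class Rulebase:
    def __init__(self, rules, initial_weight=100):
        self.weights = {rule: initial_weight for rule in rules}

    def generate_script(self, size):
        """Select `size` distinct rules, weighted by their current weights."""
        candidates = dict(self.weights)
        script = []
        for _ in range(min(size, len(candidates))):
            total = sum(candidates.values())
            pick = random.uniform(0, total)
            cumulative = 0.0
            for rule, weight in candidates.items():
                cumulative += weight
                if pick <= cumulative:
                    script.append(rule)
                    del candidates[rule]
                    break
        return script

    def update(self, script, won, reward=30, penalty=20, floor=10):
        """Reward rules used in a winning script, penalise them otherwise."""
        delta = reward if won else -penalty
        for rule in script:
            self.weights[rule] = max(floor, self.weights[rule] + delta)

# Hypothetical fighter rulebase.
rb = Rulebase(["charge", "heal self", "use ranged weapon", "drink potion", "flee"])
script = rb.generate_script(size=3)
rb.update(script, won=True)  # the outcome would come from the actual encounter
```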

Dynamic scripting has been tested in the state-of-the-art commercial role-playing game NEVERWINTER NIGHTS. It was used to control several different characters in a team that was pitted against a team of similar characters driven by manually designed game AI (illustrated in the figure). Dynamic scripting proved surprisingly successful, even against opponents that switched regularly between very different (but all strong) tactics. Even without any initial knowledge, dynamic scripting enabled the team it controlled to convincingly outperform its opponents after about 30 encounters on average.

IKAT continues to investigate dynamic scripting by applying it to different types of games, such as real-time strategy games. The research is also being extended to other forms of online learning in games, such as learning from a data store of gameplay experiences that is used as a model to predict the results of selected actions.
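
A minimal sketch of that last idea, under the assumption that experiences are stored as (game state, action, outcome) triples and that the outcome of the most similar stored state is used as the prediction; the feature representation, the distance measure and the example data are all hypothetical:

```python
# Sketch of predicting action outcomes from stored gameplay experiences.
# State features, actions and outcomes are hypothetical illustrations.

def nearest_outcome(store, state, action):
    """Return the outcome of the most similar stored state for this action."""
    matches = [(s, o) for (s, a, o) in store if a == action]
    if not matches:
        return None
    best_state, best_outcome = min(
        matches,
        key=lambda entry: sum((x - y) ** 2 for x, y in zip(entry[0], state)),
    )
    return best_outcome

# Experiences: (state features, action, observed outcome).
store = [
    ((0.9, 0.2), "charge", "won"),
    ((0.3, 0.8), "charge", "lost"),
    ((0.4, 0.7), "retreat", "survived"),
]
print(nearest_outcome(store, (0.85, 0.25), "charge"))  # -> "won"
```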

Because many game players find the AI of modern games unsatisfactory, using machine learning to create stronger game AI is a worthwhile pursuit. However, it should be noted that game AI that is too strong may not be entertaining, especially not for novice players. Since the aim of commercial games is to provide entertainment, the game AI should scale transparently with the observed experience level of the human player. Adaptive game AI can provide that. Thus, adaptive game AI benefits not only strong players, who are given new challenges, but also weaker players, who may achieve victory without the feeling that the game just let them win.
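
One simple way such scaling could be realised is sketched below: the adaptive AI keeps a running estimate of the human player's win rate and nudges a 'strength' parameter towards a target rate. The target rate, the step size and the way strength would influence the AI are assumptions made for illustration only.

```python
# Sketch of difficulty scaling: nudge an AI "strength" parameter so that the
# human player's observed win rate stays near a target value. Target rate,
# step size and the meaning of `strength` are illustrative assumptions.

def adjust_strength(strength, player_won, wins, games, target=0.5, step=0.05):
    """Update win statistics and scale AI strength toward the target win rate."""
    games += 1
    wins += 1 if player_won else 0
    win_rate = wins / games
    if win_rate > target:
        strength = min(1.0, strength + step)   # player winning too often: get tougher
    elif win_rate < target:
        strength = max(0.0, strength - step)   # player losing too often: ease off
    return strength, wins, games

strength, wins, games = 0.5, 0, 0
for outcome in [True, True, False, True]:      # hypothetical encounter results
    strength, wins, games = adjust_strength(strength, outcome, wins, games)
print(round(strength, 2))                      # strength drifts upward: 0.7
```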

Link:
http://www.cs.unimaas.nl/p.spronck

Please contact:
Pieter Spronck, Maastricht University, The Netherlands
Tel: +31 43 388 3334
E-mail: p.spronck@cs.unimaas.nl
