19 Game Playing Algorithms

Bhushan Trivedi


 

Introduction

 

Game playing attracts everyone. It has been a major area of research and is commercially successful as well. We will look at some of the components needed by game playing algorithms here. Game playing algorithms obviously must mimic human game playing processes, as these programs interact directly with humans and humans expect such programs to play like them. We have referred to a chess playing program many times in our discussion so far. There are quite a few other games which conventional computing systems offered to users, including solitaire, treasure hunt, etc. In the Internet era, there is a plethora of multi-player games which users can play over the Internet. Our interest, obviously, is in finding algorithms for coding these games. Most, if not all, solutions require some basic ingredients to work. For example, one component of any two-player or multi-player game is to generate only plausible moves. A chess player does not think of all possible moves but only of those moves which make sense. (This is quite similar to our discussion on the sparse neighborhood function.) Another important idea is to have some knowledge about the game domain which helps the solver determine which moves are better than others. A kind of heuristic function can be drawn from that information. One can clearly see that the things we discuss here, the plausible move generation process and the selection and execution of the heuristic function, are based on domain related information. For chess, the plausible move generator is based on how important each piece is. For example, given one move involving the queen and another involving a pawn, unless there is some really unusual situation, the move involving the queen is much more important than the move involving the pawn.

 

In a game of cards, for example bridge, certain sequences of cards are more important than others. Thus the plausible move generator is different for different games and cannot be generalized. Similarly, heuristic functions are based on the domain information of the game and thus are different for different games too. On the contrary, there are a few things which can be generalized. For example, how one searches the state space for a sequence of moves leading to a win is common to cards, chess and many other similar games. We start with a start node, move through the state space depending on the moves possible, choose the next move based on who is playing (we or the opponent), and try to find a state which indicates a win for us; if a win is not possible, we try to avoid the possibility of the opponent winning the game (going for a draw). Though it is not possible or feasible to find the complete sequence, it is possible to at least say which sequence is better than others, that is, which sequence of moves has a better chance of winning. When we discuss game playing algorithms, we are interested in algorithms which can in general be applied to the game playing domain irrespective of the game we are modeling.

 

Characteristics of game playing algorithms

 

Game playing is an interesting and challenging domain with a few unique characteristics. Let us try to elaborate. First, almost all games have quite structured rules of play. Taking chess as an example (once again), we have a complete set of rules describing which piece can be moved in which situation, which positions allow capturing a piece of the opponent, and when the game is over; all are clearly defined. Pick up another game like backgammon, a game of cards, or any other game, and one can always list all the rules of play. That helps us describe the game in a formal way, including the state space and the rules.

 

Second, success and failure are well defined; we can easily ascertain who has won the game. Thus it is easier to decide which move leads to a win or a loss.

 

Third, the problem of game playing seems simple at first sight but, like other AI problems, is plagued by combinatorial explosion. Fortunately, by studying how human players play, many good heuristics have been derived and used in game playing programs. Even with the best possible heuristics, it is impossible to explore the complete game tree in most cases. (Humans do not do that either!) Thus, except for some trivial games (for example tic-tac-toe) where the complete move sequence can be produced, a game playing program cannot explore every possibility. One cannot guarantee success in a game playing program; one can only try to increase the probability of winning. However, that has made the domain even more interesting and challenging1.

 

Game playing programs can be placed anywhere along the continuum between two extremes. At one end, there is no search at all; it is plain vanilla generate and test. We just pick a random node from the game state space and check it for a solution. The two components discussed earlier, the generator which produces plausible moves and the heuristic function, are almost nonexistent in this case. At the other extreme, the algorithm needs both of them. Multiple plausible moves are generated, and the one that looks best according to the heuristic value is chosen after exploring some levels of the tree. In game playing parlance, a level of the game tree is known as a ply, and thus one needs to explore multiple plies before judging how good a given move is. The other extreme therefore requires searching through a game tree to find a winning move sequence, or a move sequence with a higher probability of winning.

 

It is important to understand the difference between a plausible move generator and a legal move generator. In a game like chess, there are many legal moves, i.e. moves which one can play legally. A plausible move generator generates only those moves which are interesting and useful for the player. In a way, it generates only a subset of legal moves which are worth further exploration. How the move generator does this depends on many factors, including how the game is played, how significant a move is considered to be based on domain knowledge, the situation the player is currently in, and so on. Such a generator is itself a challenge to program. Another important point is that a plausible move is also different from a winning move. A plausible move may lead to a situation worse than the one the player is currently in, and may even lead to a loss. A plausible move is just more important than the others, so one must explore it carefully. The moves which are legally possible but not generated by the plausible move generator are generally not important enough to explore. The time saved by not exploring those moves is utilized in exploring the tree deeper. If a plausible move generator is not used, we would not be able to probe deeper, as the number of moves and the branching factor would not allow us to move beyond a point.
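To make this concrete, here is a minimal sketch, in Python, of how a plausible move generator might wrap a legal move generator. The helpers legal_moves and importance are hypothetical placeholders for the game-specific domain knowledge (such as piece values in chess); they are not part of any particular library.

# A sketch only: legal_moves(board) and importance(move, board) are assumed,
# game-specific helpers encoding the domain knowledge discussed above.
def plausible_moves(board, legal_moves, importance, keep=5):
    """Keep only the most promising legal moves, best first."""
    moves = legal_moves(board)                 # every legally playable move
    ranked = sorted(moves, key=lambda m: importance(m, board), reverse=True)
    return ranked[:keep]                       # prune the rest; search only these

The cut-off keep controls the effective branching factor: the smaller it is, the deeper the search can go in the same amount of time.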

 

1. In fact, game playing programs are very rewarding commercially as well. Mobile and multi-player games are in high demand.

 

One more important thing to note is that though search is one of the most important components of a game, sometimes other (direct, specific, table based lookup) methods are more appropriate. When there are few alternatives, and for each alternative there is a fixed solution, one can forgo the conventional AI search method based on the state space and use a table based lookup. For a given move, one can search the database and choose the next move. The advantage of this mechanism is that it is very fast. In the game of chess, for example, the time one has to make a particular move is always limited, and it is always good to save time when we can. In general, good chess playing programs use such moves in the beginning and end parts of the game. Whenever the opponent plays a move, the chess board position is fed into the database. The database uses that board position as a key to retrieve another board position indicating its move as a response, which the chess playing program simply outputs as its next move. In game playing parlance, these are known as book moves.
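As an illustration, a book-move lookup can be as simple as a dictionary keyed by an encoded board position. The keys and replies below are invented placeholders, not a real opening book, and the way positions are encoded is an assumption.

# Illustrative only: position keys and replies are made-up placeholders.
opening_book = {
    "start_position": "e2e4",        # encoded position -> stored reply
    "after_e2e4_e7e5": "g1f3",
}

def book_move(position_key):
    """Return the stored reply for a known position, or None to fall back to search."""
    return opening_book.get(position_key)

# Prefer the instant book move when one exists; otherwise run the search.
reply = book_move("start_position")
if reply is None:
    pass  # fall back to the tree search described later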

 

It would be better if we could search through the entire game tree and reach a goal node, but that is impossible in most cases. For example, in an average chess game, the branching factor is around 35 while the average number of moves taken by a player is around 50 (so 100 plies for both players). That means, on average, a chess game tree contains about 35^100 positions. Such a staggering number of moves is impossible to explore even on the fastest computers humans have ever designed, so we must curtail the search process after exploring a few plies.
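The arithmetic behind this estimate is easy to check; the short Python snippet below, using the branching factor and game length quoted above, prints the order of magnitude of the full tree and of a few shallow cut-offs.

branching_factor = 35     # average legal moves per position (from the text)
plies = 100               # about 50 moves per player

full_tree = branching_factor ** plies
print(f"35^100 has roughly {len(str(full_tree))} digits")   # about 155 digits

# Even a few plies grow quickly, which is why the search is cut off early.
for depth in (2, 4, 6):
    print(f"depth {depth}: about {branching_factor ** depth:,} positions")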

 

Another important component of a game playing algorithm is the heuristic function. It is known as the static evaluation function in game playing parlance. The static evaluation function takes a board position of the game as input and generates a number which indicates how likely that board position is to lead to a win. When the game playing algorithm explores the nodes, it applies the static evaluation function to them and chooses the best one to explore based on the static evaluation value. When a game playing algorithm is deciding on its next move, the current board position is input to the algorithm; the algorithm explores the game tree, as far as is feasible, with that board position as the root. Once the game tree is constructed, the most promising move is selected from it based on the static evaluation values of the nodes explored.
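A minimal sketch of a static evaluation function for chess, based only on material count, is shown below. The board representation (a dict mapping squares to piece letters, uppercase for one side and lowercase for the other) and the piece weights are illustrative assumptions; real evaluation functions also weigh mobility, king safety and other positional features.

# Conventional textbook piece weights; an illustrative choice, not a fixed rule.
PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def static_evaluation(board):
    """Positive scores favour the uppercase side, negative favour the lowercase side."""
    score = 0
    for piece in board.values():               # board: square -> piece letter
        value = PIECE_VALUE.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# Example: the uppercase side has an extra queen, so the score is +9.
print(static_evaluation({'e1': 'K', 'd1': 'Q', 'e8': 'k'}))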

 

History

 

Claude Shannon published a paper in 1950 indicating how a selective search (using a plausible move generator) can be helpful. In fact, he also described how one could write a chess playing program. Alan Turing, a few years later, described his own version of a chess playing program but never actually ran it on a machine. Arthur Samuel, in the 1950s, built a checkers (draughts) playing program which was the first game playing program of any significance. The program had the ability to assign credit to moves, so that a good move (which led to a win) got more credit while a bad move (which led to a loss) got less. In the next iteration the program used this updated information to play better, i.e. to choose moves with better credit. The program was thus able to learn over a period of time and become better and better. The effect of this continuous learning escalated the program's ability to such a level that it was eventually able to beat Samuel himself. The computer Deep Thought beat the British chess master David Levy in 1989. The most significant achievement of chess playing programs was the success of IBM's Deep Blue, which won a game against Garry Kasparov, then the world champion, in 1996 and defeated him in a full match in 19972. More recently, a program called X3D Fritz has come on the chess scene. X3D Fritz played four games against Kasparov in November 2003; the score was one win for Kasparov, one win for X3D Fritz and two draws. The game is played on a computer with a chess board that appears three-dimensional when viewed through special glasses.

 

In fact, the game playing domain has become so involved that Stanford University has begun a General Game Playing project which aims to build and improve a game playing platform that others can use to build games. They have also defined a Game Description Language which helps designers write the rules of a game. All in all, they have made life easier for designers of game playing programs, who can now concentrate more on the game logic rather than the game playing algorithm part. Stanford University has also organized a competition for building such game players and announced winners every year since 2005.

 

Quite a few research papers have been published in the recent past not only on how games can be played by computer programs but also on how programs can learn from experience and grow better with every game played. All in all, this area is booming with newer algorithms and solutions. However, the basic ideas of game playing remain the same. The only significant addition in the recent past concerns solutions for multi-agent games played over the Internet. Apart from general game playing problems, such algorithms need to address additional issues: event driven processing (process the game board on input from any of the players or on some other event like a timeout), concurrency control (multiple players should not put the game board into an inconsistent state; for example, in a car racing game, two players cannot simultaneously park their cars in a parking area meant for one car. The parking area may be empty when both players look for it, and both may try to park there at the same time, but only one should succeed), and many other issues. We will refrain from addressing those issues in our study, but the web is a good resource for learning about multi-agent games, with many research papers and game platform white papers available for such games.

 

2. The program was designed specifically to play against Kasparov; it contained a huge database of many of his previous games, his strengths and weaknesses, and so on. The IBM scientists worked for three to four years building a profile that could beat him. In a way, it is a tribute to that world champion that this much effort was needed to beat him.

 

Types of Games

 

An interesting categorization of the types of games one would like to program is drawn in [1]. We describe it only briefly here; one can refer to it for further information.

 

The first classification is based on the number of players. The two person game (like chess) is an important class into which most games fall. In such a game the players take turns, so each player plays alternate moves. An interesting consequence of this is that a player may try to maximize his chances of winning while the opponent, in the next move, may try to do exactly the opposite and minimize the player's chances of winning. Thus a conventional search algorithm will not work here.

 

Another way to classify games is as zero sum games versus positive and negative sum games. Zero sum games are those where the gain by one player is a loss of equal amount for the opponent. If there are multiple opponents, the gain by one is equal to the accumulated loss of everyone else. That means the total credit always remains the same. Any game involving gambling of some sort is an example: a loss for one is a gain for the other. Some games are not like that. In a car racing game, a collision may take both players out so that nobody wins, and thus the overall credit is less than before. A market game based on some pricing scheme may be a negative sum game if the players do not do good business, or a positive sum game if all of them do good business.

 

Another dimension categorizes games by whether the player has complete information and makes decisions based on that complete information, or not. Chess is an example of a complete information game. When a player plays a move, he is completely aware of the position of each piece on the board. Nothing is hidden or unknown to him. There are many games which do not work that way. For example, almost all card games involve some guessing about what the opponents' cards are. The players do not have complete information about either the other players or the heap from which they draw cards. The players have a clear idea about the number and types of cards in the game, but not about what the other players hold in their hands. The players make decisions based on the available information, reactions from other players, and help from partners (some card games have partners who work in tandem to win). They also have some past information about each player's favorite method of playing and how they react to situations, and they may use that in planning their moves.

 

Yet another dimension is whether the game is deterministic or not. If a player is not allowed to make a deterministic move, the game is probabilistic; for example, every game which involves throwing a die and acting according to the value obtained is probabilistic and not deterministic. Multi-player games where the outcome of a move depends on other players (for example car racing) also depend on how others play and thus are not deterministic either. On the contrary, chess is a deterministic game. You can play a move knowing exactly what it will result in, without relying on anything else. For example, when you move a pawn one row forward, it moves one row forward only and not two rows. Unlike that, when you play a card, whether you win the hand or not depends on what others play. Your move alone cannot determine the outcome.

 

Game trees

 

Game trees are search trees in which each level corresponds to a move by one of the players. Before proceeding further, we assume a few things about the game trees in this module.

 

1. Every player is allowed to play only one move in a single turn. If a player is allowed to take multiple moves in special cases, we treat them as a single move consisting of the cumulative effect of those moves.

2. There are only two players. In a typical two player game, each player owns alternate plies (levels). Again, for a multi-player game, the same solution can be extended.

3. The games are complete information games. Thus it is possible for us to apply the static evaluation function to candidate moves with complete knowledge of the board position used in making a decision.

4. The two players' goals are exactly opposite to each other. When one player makes a decision at one layer, the next layer is the second player's prerogative. Thus, what the first player tries to maximize at one layer, the second player tries to minimize at the next layer. That is why these are sometimes denoted as Max and Min layers.

 

As mentioned earlier, a game tree contains two kinds of layers which alternately belong to one of the players. Sometimes the Max layer is indicated by square nodes and the Min layer by round nodes. Sometimes Max is indicated by a triangle (∆) while Min is indicated by an inverted triangle (∇). In many cases the nodes are not differentiated and all layers contain the same type of nodes. We will look at this in more detail in the next module.

 

A game tree begins with a root node which belongs to the player who is about to move, and thus it is a Max layer. He can choose one of the children of that node. An example game tree is depicted in figure 19.2. The game is tic-tac-toe, a very popular game. You probably know how to play it, but here is a brief description in case you do not.

 

This game is played by two players; one marks his move with X while the other uses 0. There is a 3 x 3 matrix which is completely empty in the beginning. When the game begins, the players take turns marking one of the cells of the matrix. The player who manages to mark three cells in a line before the other player is the winner. Three cells in a line can be marked in a total of 8 different ways: three columns, three rows and two diagonals. Figure 19.1 shows the eight different ways one can win. For simplicity, the opponent's moves are not shown. The aim of the player is to reach any one of these states. Another important aim of a player is to make sure that the opponent cannot achieve a similar state and win (so you lose). It is also possible to reach a state where neither player has three marks in a line and neither wins. That is known as a draw.
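Those eight winning lines are easy to enumerate in code. In the sketch below the board is assumed to be a flat list of nine cells holding 'X', '0' or None; this representation is a convenient assumption, not the only possible one.

# The eight winning lines: three rows, three columns, two diagonals.
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
    (0, 4, 8), (2, 4, 6),               # diagonals
]

def winner(board):
    """Return 'X' or '0' if one of the eight lines is filled, otherwise None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None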

 

Figure 19.1 Eight different ways one can fill the cells to win

 

The game tree depicted in figure 19.2 is not drawn completely but still conveys the idea. Six plies of the game tree are drawn, and each player has taken three moves. At each ply we have explored only one child for simplicity; other children can be explored in a similar manner. The first player marks his choice using a cross (X) while the second player (the opponent) uses a zero (0) to indicate his choice. The first player takes a move at levels 1, 3, 5 and so on, while the opponent takes a move at levels 2, 4, 6 and so on. The moves taken by the first player (who is under consideration right now) are Max moves, while the moves taken by the second player (the opponent) are Min moves. It will soon be clear why we name them so.

 

Look at level 1. This is where the player starts, marking the left top corner. In fact he has a total of 9 possible moves, out of which he chooses one; we have not shown the other 8. At the next level there are 8 possible choices for the opponent. He chooses one, which is shown in the figure; the other 7 nodes are not explored. In a real game tree all choices are explored.

 

Each node in this game tree is a 3 x 3 matrix representing the tic-tac-toe board. Each level is numbered and marked as Max or Min.
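One possible way to represent such a node in code, keeping the board together with its ply number and its Max/Min tag, is sketched below; the field names are illustrative choices, not a prescribed structure.

from dataclasses import dataclass, field

@dataclass
class GameNode:
    board: list            # the 3 x 3 board, stored as a flat list of 9 cells
    ply: int               # level in the game tree (1, 2, 3, ...)
    is_max: bool           # True for a Max level, False for a Min level
    children: list = field(default_factory=list)   # explored child nodes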

 

Suppose you are the player. You would like to win. Thus you would like to reach the state encircled in figure 19.2, with W (win) written just next to it. (Here we have all X's in a vertical line in the leftmost column.) In fact, if we draw the complete game tree, there are a total of 8 different possible states which we can mark as W, as shown in figure 19.1. Similarly, the opponent wants to reach a state marked by a double circle (where he has three 0s in a line), named L. He also has more than one such state, but not always as many as the first player. For example, in the case depicted in figure 19.2, where the first player has already occupied the left top corner, three of the winning choices are simply not available to the second player. Closely observe the content of figure 19.3: the three winning lines passing through the left top corner are not available to him, as that corner is occupied by the first player and he cannot mark three cells in a line using that corner.

 

Let us probe further. We would like to reach the state marked W in figure 19.2 (where we have three crosses covering the leftmost column). We would like to pick each node in a way that lets us reach that state by the time the 5th move is taken. Let us assume that the game has reached the 3rd level. We would like to pick the move marked 2 to reach the state W. Can we do that? We cannot. It is the opponent's move; he gets to decide. If he is not a novice at the game, he will not choose that move, as he can see that if he does, you can win. He would instead choose move 1, which prevents you from winning (and also gives him a chance to win if you forget to block the central cell in your next move).

 

When there are contrasting goals at alternate layers, it is impossible to apply the search methods we have used in the past. We cannot simply optimize every move.

Figure 19.2 A partial game tree for the game tic-tac-toe

Figure 19.3 Three fewer choices for player 2 to win

 

Thus, however simple this game looks, we need to devise a different algorithm for coding it, as the conventional search algorithms we have used so far do not really work. What do we need to change here? We want a search algorithm which works differently at alternate levels. At odd levels (1, 3, 5, ...), the algorithm should act like a normal search algorithm and pick the state which maximizes our chances of winning. At even levels (2, 4, 6, ...), we have to remember that it is our opponent's turn, so we need to pick the move that is worst for us (as the opponent is likely to choose that). Thus we will have to choose the Max state at one level and the Min state at the next. Now you can see why they are called Max and Min levels.
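The alternating behaviour described above is exactly what the minimax procedure implements. Below is a minimal sketch with a depth cut-off; the helpers plausible_moves, apply_move, static_evaluation and opponent are hypothetical game-specific functions passed in as parameters (their exact signatures here are assumptions).

def minimax(board, depth, player, is_max,
            plausible_moves, apply_move, static_evaluation, opponent):
    """Value of a position when Max levels pick maxima and Min levels pick minima."""
    moves = plausible_moves(board, player)
    if depth == 0 or not moves:
        return static_evaluation(board)     # cut off: judge the position statically
    values = [
        minimax(apply_move(board, move, player), depth - 1,
                opponent(player), not is_max,
                plausible_moves, apply_move, static_evaluation, opponent)
        for move in moves
    ]
    return max(values) if is_max else min(values)   # Max level vs Min level

In practice this is called only a few plies deep, for the reasons discussed under combinatorial explosion above.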

 

Summary

 

The process of game playing using computer programs has attracted users and researchers alike for years. There have been many attempts to code games and many are successful; some have been strong enough to beat even world champions. Game playing presents a very structured and programmable domain. Games can be classified in many ways, but most games are two player and deterministic, with a visible world and complete information. The biggest challenge that games throw at the designer, like other AI problems, is combinatorial explosion. The challenge of generating moves in real time is met by two important components of every game that is seriously modeled as a computer program: a plausible move generator and a static evaluation function which can suggest which moves are better than others.

 

Apart from these two components, a search algorithm which looks at the game tree and finds the best move among the available moves is also needed. A game tree consists of Min and Max levels. A Max level is played by the player, and it is where the chances of winning are to be maximized. A Min level is played by the opponent, where the player's chances of winning are minimized. It is clear that we need a different search algorithm which can address this need.
