**Alpha Beta Cut Off and Pruning in Artificial Intelligence (AI)**

**Alpha Beta Pruning**

Alpha-beta pruning is a way to reduce the number of nodes examined by the **Minimax strategy**. Branches that cannot influence the final decision are skipped, so the search takes less time without changing the move that is chosen.

An implementation of **alpha-beta** keeps track of the best value guaranteed so far for each side as it moves through the tree.

**Alpha-beta pruning** is one of the most elegant of all AI search algorithms. The idea, similar to the branch-and-bound method, is that the minimax value of the root of a game tree can be determined without examining all of the nodes at the search frontier.

In the accompanying figure, only the labelled nodes are generated by the **algorithm**, and the thick black lines indicate pruning. MAX moves at the square nodes and MIN moves at the circular nodes. The search proceeds depth-first to minimize memory requirements, and nodes are evaluated only when necessary.

The first two leaf nodes, e and f, are evaluated statically at 4 and 5 respectively, so the value of their parent d, a MIN node, is 4. Next, node h is evaluated at 3. Since the value of h's sibling is still unknown, 3 is an upper bound on their parent g, so node g is labelled ≤ 3. The value of node c, a MAX node, must therefore be 4: it is at least 4 because of d, and g, being at most 3, cannot raise it. Since the minimax value of node c is now determined, the remaining child of g need not be generated or evaluated.


On the other side of the tree, nodes k and l are evaluated at 6 and 7 respectively, so the backed-up value of their parent j, a MIN node, is 6, the minimum of those values. This means that the **minimax value** of node i, a MAX node whose right child is still unknown, must be at least 6. Since node b is a MIN node whose left child c has value 4, the value of b must be 4, the minimum of 4 and a value of at least 6; thus we achieve another cutoff, and the rest of node i is pruned. The right half of the tree shows an example of deep pruning. After evaluating the left half of the tree, we know that the value of the root node a is at least 4 (the value backed up from node b).

When node q is evaluated at 1, the value of its parent p becomes 1 or less. Since the value of the root is already known to be at least 4, a value of 1 or less at p can never propagate to the root, so the remaining children of p are pruned; this is deep pruning, because the cutoff is justified by a bound several levels up the tree. The value of node m is the minimum of the value of node n and its siblings, and since node n turns out to be worth only 2, the value of m must be 2 or less. This prunes the siblings of node n, and the value of the root node a is confirmed to be 4. We have therefore computed the minimax value of the root of the tree while generating only 7 of the 16 leaf nodes. Since alpha-beta pruning performs a minimax search while pruning much of the tree, its effect is to allow deeper search for the same amount of computation. This raises the question of how much alpha-beta improves performance. The best way to characterize the efficiency of a pruning algorithm is in terms of its effective branching factor.
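The walkthrough above can be reproduced in code. Below is a minimal sketch in Python; the nested-list tree mirrors the shape described in the text, and the leaves written as 0 are placeholders for positions the walkthrough never evaluates (they are all pruned, so their values do not matter).

```python
import math

# Depth-4 game tree from the walkthrough, as nested lists (ints are leaves).
# Leaf values 4, 5 (under d), 3 (under g), 6, 7 (under j), 1 (under p)
# and 2 are taken from the text; zeros mark the leaves that get pruned.
tree = [
    [                           # node b (MIN)
        [[4, 5], [3, 0]],       # node c (MAX): d = [4, 5], g = [3, _]
        [[6, 7], [0, 0]],       # node i (MAX): j = [6, 7], sibling pruned
    ],
    [                           # node m (MIN)
        [[1, 0], [2, 0]],       # node n (MAX): p = [1, _], then [2, _]
        [[0, 0], [0, 0]],       # siblings of n, pruned entirely
    ],
]

leaves_evaluated = 0

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    global leaves_evaluated
    if isinstance(node, int):          # static evaluation of a leaf
        leaves_evaluated += 1
        return node
    value = -math.inf if maximizing else math.inf
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            value = max(value, score)
            alpha = max(alpha, value)
        else:
            value = min(value, score)
            beta = min(beta, value)
        if beta <= alpha:              # cutoff: prune remaining siblings
            break
    return value

root_value = alphabeta(tree, maximizing=True)
print(root_value, leaves_evaluated)    # 4, after evaluating 7 of 16 leaves
```

Running this confirms the walkthrough: the root's minimax value is 4, and only 7 of the 16 leaves are statically evaluated.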

The effective branching factor is the d-th root of the number of frontier nodes that must be evaluated in a search to depth d, and its upper limit is the brute-force branching factor b. The efficiency of alpha-beta pruning depends on the order in which nodes are encountered at the search frontier. For any set of frontier node values, there exists some ordering of the values such that alpha-beta performs no cutoffs at all; in that worst case, the effective branching factor remains b.

At the other extreme, with perfect ordering of the frontier nodes, the effective branching factor decreases from b to b^1/2, the square root of the brute-force branching factor. Another way of viewing the case of perfect ordering is that, since the search tree grows exponentially with depth, for the same amount of computation alpha-beta can search to twice the depth that would be reachable without pruning.
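These relationships are easy to verify numerically. Here is a small sketch using an assumed branching factor of 36 and depth 8 purely for illustration:

```python
import math

b, d = 36, 8                  # hypothetical branching factor and depth

brute = b ** d                # leaves examined by brute-force minimax
best = (b ** 0.5) ** d        # perfect ordering: effective factor sqrt(b)

# For the same work as brute-force search to depth d, perfectly ordered
# alpha-beta reaches depth 2d, since (b^(1/2))^(2d) == b^d.
assert math.isclose((b ** 0.5) ** (2 * d), brute)
print(f"brute force: {brute:,} leaves, best-case alpha-beta: {best:,.0f}")
```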


Between the worst-case and perfect orderings lies random ordering, which is the average case. Under random ordering of the frontier nodes, alpha-beta pruning reduces the effective branching factor to approximately b^3/4. This means that for the same work alpha-beta can search roughly 4/3 as deep as brute force, an improvement of 33% in search depth. In practice, however, thanks to node ordering, the effective branching factor of alpha-beta is closer to the best case of b^1/2.

The idea of node ordering is that, instead of generating the tree from left to right, we can reorder the successors of a node based on static evaluations of the interior nodes. In other words, the children of **MAX nodes** are expanded in decreasing order of their static values, and the children of **MIN nodes** are expanded in increasing order of their static values.
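This ordering step can be sketched in a few lines; here `children` and `static_value` are hypothetical stand-ins for a real game's move generator and evaluation function.

```python
def ordered_children(node, maximizing, children, static_value):
    """Return the successors of `node` in the order alpha-beta should
    visit them: children of MAX nodes sorted by static value in
    decreasing order, children of MIN nodes in increasing order."""
    return sorted(children(node), key=static_value, reverse=maximizing)

# Toy usage: the "node" is a list of leaf scores and the static value
# of a child is the score itself.
print(ordered_children([3, 1, 2], True, list, lambda c: c))   # [3, 2, 1]
print(ordered_children([3, 1, 2], False, list, lambda c: c))  # [1, 2, 3]
```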

**Alpha Beta Cutoff in Artificial Intelligence (AI)**

**Alpha Cutoff**

At the current vertex we compute a minimum value, while at its parent we compute a maximum value. When the minimum computed at the current vertex drops below the maximum already computed at its parent, we stop expanding the current vertex. This situation is illustrated in Figure 28.

Since a child of weight 4 has already been generated, the vertex being expanded is labelled ≤ 4. The point is that, from this moment on, the exact weight of the current vertex is irrelevant, so there is no need to generate its remaining children.

**Beta Cutoff**

Figure 30 shows the tree generated by the minimax algorithm for tic-tac-toe with maxDepth = 3. The parts of the tree that alpha-beta pruning does not need to generate are outlined in red. Counting them, only 15 of the 35 vertices have to be generated; the other 20, roughly 57% of the tree, are pruned.


**Minimax Algorithm**

**Minimax is a recursive algorithm** used to select the best move for a player, assuming the opponent also plays optimally. It is used in games such as tic-tac-toe, chess, Isola, checkers, and many other two-player games. These are called perfect-information games, because every possible move is visible to both players. Some two-player games, such as Scrabble, are not perfect-information games, because the opponent's moves cannot be predicted.

It works much like the look-ahead prediction a human player performs while playing.

The algorithm is called minimax because the opponent is assumed to choose the strategy that is worst for us, and we choose the move that minimizes this maximum possible loss.
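The idea can be captured in a few lines. The sketch below runs plain minimax (no pruning yet) on a tiny hypothetical game tree encoded as nested lists, where integers are the scores of final positions:

```python
def minimax(node, maximizing):
    """Plain minimax on a nested-list game tree; ints are leaf scores."""
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# MAX to move; each sub-list holds MIN's replies to one of MAX's moves.
game = [[3, 12], [2, 4], [14, 1]]
print(minimax(game, maximizing=True))  # MIN's replies yield 3, 2, 1; MAX picks 3
```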

### Minimax Algorithm (Alpha Beta Pruning)

The effectiveness of the alpha-beta procedure depends on the order in which the successors of a node are examined. In the luckiest case, the children of a MAX node are considered from the highest score to the lowest, and the children of a MIN node from the lowest score to the highest.

In general, in this most advantageous situation, alpha-beta search expands only as many leaves as minimax would expand on a game tree half as deep; equivalently, it can search twice as deep for the same effort.

**Minimax Algorithm Alpha Beta Pruning Example**

An example of **alpha-beta** search is shown below. The **alpha-beta algorithm** maintains two values, alpha and beta. Initially alpha is negative infinity and beta is positive infinity. As the recursion progresses, the "window" between alpha and beta becomes smaller.

When beta becomes smaller than alpha, the current position cannot be the result of best play by both players, so there is no need to investigate it further.

### Alpha Beta Pruning (Disadvantages and Problems)

Alpha-beta pruning clearly has advantages in this regard: it is less expensive in time than the original minimax algorithm. You may therefore wonder whether it is the best we can do. Actually, it is not, and it does not solve all of the problems associated with the original **minimax algorithm**. Below is a list of some of its disadvantages, together with better ways to achieve the goal of choosing the best move.

The evaluation of a node's utility is usually not exact; it is only a rough estimate of the value of the position, so large errors can be associated with it.

The algorithm assumes that each player always chooses the best move in the given situation. If the evaluation function is not good enough, bad moves may be selected. See Figure 5.13 in the text.

In most cases, searching the entire game tree is unrealistic, so a depth limit is set. A striking example is the game of Go, with a branching factor of about 360: even with **alpha-beta pruning**, one cannot look ahead more than a few moves in the game tree.
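Some rough arithmetic shows why. The branching factor of 360 comes from the text above; the depths below are chosen purely for illustration.

```python
b = 360                       # branching factor quoted in the text
eff = b ** 0.5                # best-case alpha-beta: sqrt(360), about 19

# Leaves to examine at each depth: brute force vs. perfectly ordered
# alpha-beta. Even the best case explodes after a handful of plies.
for depth in (2, 4, 6):
    print(depth, b ** depth, round(eff ** depth))
```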

**Alpha-beta** is designed to choose a good move, yet it still computes the values of all legal moves. A better approach uses the notion of the utility of node expansion: a good search algorithm then chooses to expand the nodes with the highest utility, those most likely to lead to better moves.

This allows quicker decisions, because a smaller decision space is searched. An extension of this idea is another technique called goal-directed reasoning.

This technique focuses on keeping a specific goal in mind, such as capturing the queen in chess. So far, however, no one has successfully combined these techniques into a fully functional system.