### Nash Equilibrium

By Jonna102

I’ll be writing up some texts on basic concepts in game theory, specifically as it applies to poker, while also giving a little bit of context from the broader theory of games. The first thing we’ll want to understand well is the Nash equilibrium. First, let’s learn a bit about games in general.

**Classifying Games**

In a broader context, game theory is the study of strategic decision making. The field studies mathematical models of conflict and cooperation between intelligent rational decision makers. A game, in this context, is the interactive exchange of strategic decisions between two or more decision makers (players). There are many different types of games that are studied, and here are some ways to classify them that are commonly used:

- Number of players
- Simultaneous vs sequential games
- Zero-sum vs nonzero-sum games
- Symmetric vs non-symmetric games
- Cooperative vs non-cooperative games
- Perfect vs imperfect information
- Combinatorial games
- Finite vs infinite games
- Discrete vs continuous games
- … and so on

For example, a game like chess is a two-person, zero-sum, sequential, non-cooperative game. Is it symmetric? Actually it’s not, since the strategies available to the white player are different from those available to black. However, if we first were to flip a fair coin and let the outcome decide who plays white, then chess becomes a symmetric game. Chess is also considered a combinatorial game, and a game with perfect information. The full game state is known to both players.

It works similarly for poker. Poker is an n-person, zero-sum, sequential, non-cooperative game, and it is symmetric assuming that all players get to play all positions equally often. In any given hand, however, the game is non-symmetric. Poker is also a game of imperfect information: none of the players know the full game state (i.e. the hole cards of the other players). Poker is also considered a game of chance, and typically not a combinatorial game, although there is certainly plenty of combinatorics in poker.

Game theorists are traditionally interested in game situations where all players are playing their best possible strategy. This is usually captured with the concept of equilibria, and the Nash equilibrium in particular. While there are games that have no Nash equilibrium, and it would be easy to give examples, the class of games relevant to poker is guaranteed to have Nash equilibria, so let’s focus on that.

**Nash Equilibrium**

It is easy to state what a Nash equilibrium is:

*A Nash equilibrium is a set of strategies, one for each player, such that no player has incentive to change their strategy given what the other players are doing.*

However, grasping the full meaning of this statement may take some consideration.

One way to visualize it is through a payoff matrix. Consider the following game: two players each choose between two strategies, A and B. The payoffs are shown in the matrix below, with player 1’s payoff listed first in each cell.

| | P2: A | P2: B |
|---|---|---|
| **P1: A** | 2, 2 | 0, 1 |
| **P1: B** | 1, 0 | 1, 1 |

It turns out that there are two equilibria in this game. P1:A, P2:A is an equilibrium because if either player switches to strategy B, they get a lower payoff. P1:B, P2:B is also an equilibrium, and this is perhaps more surprising. But consider Player 1 playing strategy B and receiving 1. Switching to strategy A (while player 2 continues to play B) would result in a payoff of zero, and the situation is exactly the same for Player 2. This game is known as the stag hunt in the game theory community.

So we learn two things here. One is that there can be multiple equilibria in a game, and there often are. The other is that an equilibrium doesn’t automatically mean that the players have maximized their joint payoffs. Nash equilibria can be inefficient. If the players were able to cooperate, surely they would both prefer to switch to strategy A. But when either player acts on their own, they cannot get there from the P1:B-P2:B equilibrium.
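The equilibrium check described above is mechanical enough to express in a few lines of code. Here is a minimal sketch that brute-forces the pure-strategy equilibria of a two-player game; the payoff numbers are the canonical stag hunt values consistent with the paragraph above ((B,B) pays 1 each, deviating to A against B pays 0, and (A,A) pays more, here 2 each):

```python
# Brute-force search for pure-strategy Nash equilibria in a 2x2 bimatrix game.
from itertools import product

# payoffs[(s1, s2)] = (player 1 payoff, player 2 payoff)
# Stag hunt values, consistent with the numbers discussed in the text.
payoffs = {
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 1),
    ("B", "A"): (1, 0),
    ("B", "B"): (1, 1),
}
strategies = ["A", "B"]

def is_equilibrium(s1, s2):
    """True if neither player can gain by a unilateral deviation."""
    u1, u2 = payoffs[(s1, s2)]
    if any(payoffs[(d, s2)][0] > u1 for d in strategies):
        return False  # player 1 has a profitable deviation
    if any(payoffs[(s1, d)][1] > u2 for d in strategies):
        return False  # player 2 has a profitable deviation
    return True

equilibria = [(s1, s2) for s1, s2 in product(strategies, strategies)
              if is_equilibrium(s1, s2)]
print(equilibria)  # [('A', 'A'), ('B', 'B')]
```

Note that the check is purely local: it asks only whether a single player can improve alone, which is exactly why the inefficient (B,B) point passes.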

The strategies of the players in this example are called pure strategies: each plays either only A or only B, with no mixing. Equilibria are not guaranteed to exist if players are restricted to pure strategies. However, if we allow players to mix between the available strategies, then equilibria are guaranteed to exist. This was proven by von Neumann for two-player zero-sum games (the minimax theorem, popularized in his 1944 book with Morgenstern), and extended by Nash in 1951 to a much wider class of games. For a simple game with no pure equilibria but where a mixed-strategy equilibrium exists, look up the matching pennies game.
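To make the matching pennies pointer concrete, here is a small sketch of its mixed equilibrium. Player 1 wins 1 if the two pennies match and loses 1 otherwise; the equilibrium has both players randomizing 50/50, which we can verify via the indifference condition:

```python
# Matching pennies: payoff to player 1 (zero-sum, so player 2 gets the negation).
u1 = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

def ev_p1(choice, q):
    """Player 1's expected payoff for a pure choice when player 2
    plays H with probability q."""
    return q * u1[(choice, "H")] + (1 - q) * u1[(choice, "T")]

# At q = 0.5 player 1 is indifferent between H and T, so mixing 50/50
# is a best response; by symmetry the same holds for player 2.
print(ev_p1("H", 0.5), ev_p1("T", 0.5))  # 0.0 0.0

# Away from q = 0.5 one pure strategy strictly dominates, so no pure
# strategy pair can be stable.
print(round(ev_p1("H", 0.7), 10))  # 0.4
```

The same indifference logic (make the opponent's options equally good so they cannot exploit you) is the engine behind mixed strategies in poker solutions as well.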

**Solution Concept**

The term Game Theory Optimal (GTO) is specific to poker as far as I know, and originated in the book The Mathematics of Poker by Chen and Ankenman. They write that a strategy pair is optimal if neither player can improve their expectation by unilaterally changing their strategy. This is the same definition as the Nash equilibrium above, but calling it optimal is a bit confusing. In most contexts, when we call something optimal, we mean the best or most favorable possible outcome. As we saw above, an equilibrium strategy pair need not maximize payoffs at all. It only means that neither player can improve on their own.

A solution concept in game theory is a formal rule for predicting how the game will be played. It needs to specify actions for all possible game states for all players, even for game states that end up never being played. For poker, this means that a solution would include all the possible actions a player could take, with all possible starting hands, over all possible board runouts, and also (for big bet games) for all possible bet sizes. And then it would include all the possible responses of all other players, and the counter responses of the first player, and so on. The metric for all this is expected value, and it is driven by two things: the frequency at which we have certain hands, and the equities of those respective hands.
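The two drivers named above, hand frequencies and hand equities, combine into expected value by simple weighting. Here is an illustrative sketch with made-up numbers (the hand classes, frequencies, and equities are hypothetical, not from any solver output):

```python
# Illustrative only: a range's expected share of the pot is the
# frequency-weighted average of each hand's equity -- the two
# ingredients the text names.
pot = 100  # hypothetical pot size in chips

# hand class -> (frequency in range, equity vs the opponent's range)
range_profile = {
    "strong": (0.20, 0.85),
    "medium": (0.50, 0.55),
    "weak":   (0.30, 0.25),
}

# sanity check: the frequencies should describe the whole range
assert abs(sum(f for f, _ in range_profile.values()) - 1.0) < 1e-9

ev = sum(freq * eq * pot for freq, eq in range_profile.values())
print(round(ev, 2))  # 52.0 -- this range expects 52% of the pot
```

A real solution performs this kind of accounting at every node of the tree, for every action, which is why solution sizes explode so quickly.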

However, a solution concept does NOT dictate that any player must take a certain action with a certain frequency. There is some confusion around this in the poker community, specifically around the concept of minimum defense frequency. The idea is that if our opponent gets into a spot where they can profitably bet any hand, that must be bad for us, so we should play in a way that denies the opponent such auto-profit spots. But nothing guarantees that either player can prevent the other from auto-profiting in some spots. On the contrary, thanks to the asymmetric nature of a single hand of poker, there will frequently be spots where one of the players gets to bet profitably with their entire range, and stubbornly trying to prevent that leads to lower expectation. There are spots where the minimum defense concept does apply, and it can be useful for approximations. It’s just not always intuitive when it applies and when it doesn’t, and the broken strategies that minimum defense frequency reasoning produces can be quite costly.
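For reference, the arithmetic behind the minimum defense frequency idea is the standard break-even calculation for a bluff. A zero-equity bluff of size bet into pot auto-profits when the defender folds more often than bet / (pot + bet), so the "minimum defense" is the complement; as the text stresses, this is a heuristic that only applies in some spots, not a rule an equilibrium must obey:

```python
# Standard MDF arithmetic: the break-even point for a zero-equity bluff.
def minimum_defense_frequency(pot, bet):
    """Fraction of range the defender must continue with so that a
    zero-equity bluff does not automatically profit.

    A bluff risks `bet` to win `pot`, so it breaks even when the
    defender folds bet / (pot + bet) of the time; MDF is 1 minus that.
    """
    return pot / (pot + bet)

# A pot-sized bet: the defender should continue with half the range.
print(minimum_defense_frequency(100, 100))  # 0.5
# A half-pot bet requires defending about two thirds of the range.
print(round(minimum_defense_frequency(100, 50), 4))  # 0.6667
```

The formula assumes the bluff has zero equity when called and ignores future streets, which is exactly why blindly enforcing it in real game trees produces the broken strategies mentioned above.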

So when we talk about solutions for game situations in poker, we mean specifying actions for all players with all possible hands, even for branches of the game tree that are never reached. Stated another way, we need to state the ranges and mixed-strategy frequencies for all actions and all players. In a small game tree with 20 nodes, there should be 20 different ranges defined. In the first example by spassewr, the game tree has around 40 nodes, and this is still a very small game tree. If you take a look at the solution from GTO Range Builder, you’ll find not only complete ranges for all nodes, but also the mixed-strategy frequencies at which each individual hand is played. We’ll also quickly realize that all practical solutions we share in the group will necessarily be approximations. This raises the question of how we measure the error – how good is an approximate solution? There’s a way to do that without even knowing the equilibrium solution, and I’ll go over it in another article.

**Other Solution Concepts**

In the poker community, GTO, or the Nash equilibrium, is considered the gold standard. But as we see above, it doesn’t actually have all the desirable properties that the poker community wants to attribute to it.

This is not specific to poker. To deal with it, the general game theory community has come up with refinements of the Nash equilibrium that give us solutions closer to what we want – solutions we can more reasonably think of as “optimal” (though this remains a vague notion). One such refinement is backward induction, which can be applied to poker. Another is subgame perfect Nash equilibrium. This has some weird quirks when applied to poker, but it can be a useful concept in some situations. There are also Bayesian equilibria and forward induction, which may be useful in some spots.

We’ll explore these as required, but the main takeaway here is to understand what a Nash equilibrium is, and what it is not. Equilibrium does not have to imply optimal in the sense that we normally use the word, and the poker community has adopted some fairly confusing terminology. We also find that game theory can be quite involved, and applied game theory in poker can easily get lost in millions of little details. The challenge for this group is to find a balance: game-theory-oriented solutions that are good enough to be competitive in today’s games, without getting too lost in either theory or practical detail.

In future articles we’ll explore some related concepts in more detail.