Broadly speaking, a set of players following specific strategies is in Nash equilibrium if no single participant can gain an advantage by unilaterally changing strategy. Whilst the outcome of an equilibrium strategy may not be the best conceivable for any given player, following the strategy is their best plan in the absence of knowledge of their opponents' plays. The Nash equilibrium is one of the most fundamental concepts in non-cooperative game theory, providing a significant generalisation of John von Neumann's earlier notion of the value of a zero-sum game, part of his Minimax theorem for such games.

Adopting the notation of strategic form games, we may make this notion precise as follows. For an n-player general sum strategic form game X1, ..., Xn, each strategy space Xi, consisting of strategies {x1, . . . , xmi}, gives rise to a space of mixed strategies SXi. These consist of mi-tuples (pi1, . . . , pimi), where pij is the probability of player i picking strategy xj ∈ Xi. For these probabilities to be valid, we additionally require that for each i the pij sum to 1, with each lying in the range 0 to 1.
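
For concreteness, a mixed strategy can be represented in Python as nothing more than a tuple of probabilities; the following minimal sketch (the function name and tolerance are my own choices, not part of the formal development) simply checks the two validity conditions above.

    # A mixed strategy for a player with m pure strategies is a tuple of
    # m probabilities; this checks the two validity conditions stated above.
    def is_valid_mixed_strategy(p, tol=1e-9):
        return all(0.0 <= q <= 1.0 for q in p) and abs(sum(p) - 1.0) <= tol

    is_valid_mixed_strategy((0.5, 0.5))        # True
    is_valid_mixed_strategy((0.7, 0.7, -0.4))  # False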

Further, we can simplify notation by identifying the strategy sets Xi with the integers {1, . . . , mi}; that is, playing '1' corresponds to the strategy x1, and so on.

Thus the expected return to player j when each player i adopts mixed strategy pi is given by

Ej(p1, ..., pn) = Σ(i1=1 to m1) ... Σ(in=1 to mn) p1i1 · p2i2 · ... · pnin · aj(i1, ..., in)

where aj is the payoff function of player j.
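
Since the notation is dense, here is the same sum written out as a small Python sketch. The function name, the use of 0-based indices, and the representation of each payoff function aj as a dictionary keyed by pure-strategy profiles are my own illustrative choices.

    from itertools import product

    def expected_payoff(j, strategies, payoffs):
        # strategies[i] is player i's mixed strategy, a tuple of probabilities.
        # payoffs[j] maps each pure-strategy profile (i1, ..., in) to player j's payoff.
        total = 0.0
        for profile in product(*(range(len(p)) for p in strategies)):
            prob = 1.0
            for player, choice in enumerate(profile):
                prob *= strategies[player][choice]   # probability of this profile occurring
            total += prob * payoffs[j][profile]
        return total

Each term of the sum is the probability of one particular pure-strategy profile being played, multiplied by player j's payoff for that profile.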

Nash's Theorem

With the above notation,
Definition: A set of mixed strategies (p1, . . . , pn) ∈ SX1 × ... × SXn is an n-player Nash equilibrium for the game given by strategic form X1, . . . , Xn if no player gains by unilaterally deviating from it.
More formally, if for every i from 1 to n and for any p' ∈ SXi,

Ei(p1, ..., pi-1, pi, pi+1, ..., pn) ≥ Ei(p1, ..., pi-1, p', pi+1, ..., pn)

then (p1, . . . , pn) is a Nash equilibrium (since no player i benefits from changing from strategy pi to any other strategy p').
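
Continuing the Python sketch above, this condition can be checked directly. Since Ei is linear in player i's own mixed strategy, player i can only improve on pi if some pure strategy already does, so it suffices to compare against pure deviations. The function name and tolerance below are again my own choices.

    def is_nash_equilibrium(strategies, payoffs, tol=1e-9):
        n = len(strategies)
        for i in range(n):
            current = expected_payoff(i, strategies, payoffs)
            m_i = len(strategies[i])
            for k in range(m_i):
                # Replace player i's strategy with the pure strategy 'always play k'.
                deviant = list(strategies)
                deviant[i] = tuple(1.0 if c == k else 0.0 for c in range(m_i))
                if expected_payoff(i, deviant, payoffs) > current + tol:
                    return False   # player i gains by deviating
        return True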

Existence of a Nash equilibrium in any two-player zero-sum game follows from the Minimax theorem (the value of such a game is unique, although the equilibrium strategies achieving it need not be); this is demonstrated directly for the 2×2 case in the saddle point writeup and follows by dominated strategies or linear programming techniques for games with larger strategy sets.
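
As a rough sketch of the linear programming approach (the standard formulation for the row player; I use the scipy library purely for convenience, and the function name is my own):

    import numpy as np
    from scipy.optimize import linprog

    def solve_zero_sum(A):
        # A[i][j] is the row player's payoff; the row player maximises the value v
        # subject to their mixed strategy x guaranteeing at least v against every
        # column the opponent might play.
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        c = np.zeros(m + 1)
        c[-1] = -1.0                                        # maximise v = minimise -v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])           # v - sum_i x_i A[i][j] <= 0 for each column j
        b_ub = np.zeros(n)
        A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)    # probabilities sum to 1
        b_eq = [1.0]
        bounds = [(0.0, 1.0)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[-1]                         # (row player's strategy, value of the game)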

The Prisoner's dilemma gives rise to a Nash equilibrium (mutual defection, see dominated strategy for discussion), as does the iterated version, indicating that the zero-sum condition is not necessary. In his 1950 dissertation, John Nash proved that much more was true:

(Nash's theorem) Any finite n-player game in strategic form has a Nash equilibrium.

It is worth noting what this theorem does not say alongside what it does; there is no claim of uniqueness, nor can there be, as the Stag Hunt, with its two distinct pure-strategy equilibria, demonstrates. The sketch below checks this, along with the mutual-defection equilibrium of the Prisoner's dilemma mentioned above.
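
Both of these examples can be fed to the earlier Python sketch; the particular payoff numbers below are conventional choices of my own, not anything fixed by the games themselves.

    # Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
    pd_payoffs = {
        0: {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1},   # first player's payoffs
        1: {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1},   # second player's payoffs
    }
    defect = (0.0, 1.0)
    print(is_nash_equilibrium([defect, defect], pd_payoffs))   # True: mutual defection

    # Stag Hunt: strategy 0 = hunt the stag, 1 = hunt a hare.
    sh_payoffs = {
        0: {(0, 0): 4, (0, 1): 0, (1, 0): 3, (1, 1): 3},
        1: {(0, 0): 4, (0, 1): 3, (1, 0): 0, (1, 1): 3},
    }
    stag, hare = (1.0, 0.0), (0.0, 1.0)
    print(is_nash_equilibrium([stag, stag], sh_payoffs))       # True
    print(is_nash_equilibrium([hare, hare], sh_payoffs))       # True: a second, distinct equilibrium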

The proof of this result depends on some topology and delicate notation, which I won't attempt in raw HTML (the definitions earlier in this writeup are ugly enough!). The interested reader can find a proof in section 4.2.1 of the print version of this work described on the homenode.


Part of A survey of game theory- see project homenode for details and links to the print version.