Abstract
We investigate a model of learning in the iterated prisoner's dilemma game. Players choose between three strategies: always defect (ALLD), always cooperate (ALLC) and tit-for-tat (TFT). The only strict Nash equilibrium in this situation is ALLD. When players learn to play this game, convergence to the equilibrium is not guaranteed; for example, we find cooperative behaviour if players discount observations in the distant past. When agents use small samples of observed moves to estimate their opponent's strategy, the learning process is stochastic, and sustained oscillations between cooperation and defection can emerge. These cycles are similar to those found in stochastic evolutionary processes, but the noise sustaining the oscillations has a different origin: the imperfect sampling of the opponent's strategy. Based on a systematic expansion technique, we are able to predict the properties of these learning cycles, providing an analytical tool with which the outcome of more general stochastic adaptation processes can be characterised. © 2011 IOP Publishing Ltd and SISSA.
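The abstract describes agents that respond to a noisy, discounted estimate of their opponent's mixed strategy over the three pure strategies ALLD, ALLC and TFT. The sketch below illustrates this kind of learning dynamics; the per-round payoff matrix, the sample size, the discount factor `lam` and the softmax response rule are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: two agents learning a 3-strategy iterated prisoner's
# dilemma (ALLD, ALLC, TFT), each estimating the opponent's mixed strategy
# from a small sample of recently observed moves.
import numpy as np

rng = np.random.default_rng(0)

# Assumed average per-round payoffs of the row strategy against the column
# strategy in a long iterated PD (T=5, R=3, P=1, S=0); first-round
# corrections for TFT vs ALLD are ignored here.
#            ALLD  ALLC  TFT
A = np.array([[1.0, 5.0, 1.0],   # ALLD
              [0.0, 3.0, 3.0],   # ALLC
              [1.0, 3.0, 3.0]])  # TFT


def softmax(x, beta):
    z = np.exp(beta * (x - x.max()))
    return z / z.sum()


def simulate(steps=5000, sample_size=5, lam=0.1, beta=5.0):
    """Each step, both agents draw a pure strategy, observe the opponent's
    recent moves, and update a discounted empirical estimate of the
    opponent's mixed strategy."""
    est = [np.full(3, 1 / 3), np.full(3, 1 / 3)]   # estimated opponent mix
    history = [[], []]                              # observed opponent moves
    traj = np.zeros((steps, 3))
    for t in range(steps):
        # Each agent responds (noisily) to the expected payoff against its
        # current estimate of the opponent's strategy.
        probs = [softmax(A @ est[i], beta) for i in range(2)]
        moves = [rng.choice(3, p=probs[i]) for i in range(2)]
        for i in range(2):
            history[i].append(moves[1 - i])
            # Estimate the opponent's strategy from the last `sample_size`
            # observed moves only: this is the source of sampling noise.
            sample = history[i][-sample_size:]
            freq = np.bincount(sample, minlength=3) / len(sample)
            # Exponential discounting of older information.
            est[i] = (1 - lam) * est[i] + lam * freq
        traj[t] = probs[0]
    return traj


traj = simulate()
print("mean probabilities of (ALLD, ALLC, TFT) for player 1:",
      traj[2500:].mean(axis=0).round(3))
```

With small `sample_size` and nonzero `lam`, trajectories of this kind do not settle onto the ALLD corner but keep fluctuating, which is the qualitative behaviour (noise-sustained cycles between cooperation and defection) that the paper analyses with a systematic expansion.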
| Original language | English |
| --- | --- |
| Article number | P08007 |
| Journal | Journal of Statistical Mechanics: Theory and Experiment |
| Volume | 2011 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - Aug 2011 |
Keywords
- applications to game theory and mathematical economics
- game-theory (theory)
- stochastic processes