Cycles of cooperation and defection in imperfect learning

Tobias Galla

    Research output: Contribution to journal › Article › peer-review

    Abstract

    We investigate a model of learning in the iterated prisoner's dilemma game. Players choose between three strategies: always defect (ALLD), always cooperate (ALLC) and tit-for-tat (TFT). The only strict Nash equilibrium in this situation is ALLD. When players learn to play this game, convergence to the equilibrium is not guaranteed; for example, we find cooperative behaviour if players discount observations in the distant past. When agents use small samples of observed moves to estimate their opponent's strategy, the learning process is stochastic, and sustained oscillations between cooperation and defection can emerge. These cycles are similar to those found in stochastic evolutionary processes, but the noise sustaining the oscillations has a different origin: it lies in the imperfect sampling of the opponent's strategy. Based on a systematic expansion technique, we are able to predict the properties of these learning cycles, providing an analytical tool with which the outcome of more general stochastic adaptation processes can be characterised. © 2011 IOP Publishing Ltd and SISSA.
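
    The mechanism described in the abstract can be illustrated with a minimal simulation sketch. This is not the paper's exact model: the stage-game payoffs, the repeated-game length m, the response intensity beta, the memory-loss rate lam, the sample size N, and the softmax (logit) response rule are all illustrative assumptions. The sketch only shows the qualitative ingredients named in the abstract: discounting of past observations, and estimation of the opponent's mixed strategy over {ALLD, ALLC, TFT} from a small sample of observed play.

```python
import numpy as np

# Illustrative prisoner's dilemma stage payoffs (T > R > P > S)
T_, R_, P_, S_ = 5.0, 3.0, 1.0, 0.0
m = 10  # rounds per repeated game (assumed, not from the paper)

# Average per-round payoff A[i, j] of row strategy i against column
# strategy j, strategy order [ALLD, ALLC, TFT]. E.g. ALLD vs TFT earns
# T once (TFT opens with C), then P for the remaining m-1 rounds.
A = np.array([
    [P_,                    T_, (T_ + (m - 1) * P_) / m],
    [S_,                    R_, R_],
    [(S_ + (m - 1) * P_)/m, R_, R_],
])

rng = np.random.default_rng(0)
beta = 5.0   # response intensity of the softmax rule (assumed)
lam = 0.1    # discounting rate for past observations (assumed)
N = 10       # small sample used to estimate the opponent's strategy

def softmax(x, beta):
    z = np.exp(beta * (x - x.max()))
    return z / z.sum()

# Attraction-like score vectors over the three strategies, one per player
q = [np.zeros(3), np.zeros(3)]

traj = []
for t in range(5000):
    p = [softmax(q[0], beta), softmax(q[1], beta)]
    # Each player sees only N sampled moves of the opponent and
    # estimates the opponent's mixed strategy from that small sample;
    # this imperfect sampling is the noise source.
    est = [rng.multinomial(N, p[1]) / N,
           rng.multinomial(N, p[0]) / N]
    # Discounted update toward payoffs against the estimated strategy
    q[0] = (1 - lam) * q[0] + lam * (A @ est[0])
    q[1] = (1 - lam) * q[1] + lam * (A @ est[1])
    traj.append(p[0].copy())

traj = np.array(traj)
print("long-run average strategy of player 1 (ALLD, ALLC, TFT):",
      traj[1000:].mean(axis=0).round(3))
```

    In this sketch, taking N large makes the estimates accurate and the dynamics approach a deterministic learning flow, while small N strengthens the sampling noise that, in the paper's analysis, sustains the cycles between cooperation and defection.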
    Original language: English
    Article number: P08007
    Journal: Journal of Statistical Mechanics: Theory and Experiment
    Volume: 2011
    Issue number: 8
    DOIs
    Publication status: Published - Aug 2011

    Keywords

    • applications to game theory and mathematical economics
    • game-theory (theory)
    • stochastic processes
