Abstract
Humans and other animals can adapt their social behavior in response to environmental cues, including feedback obtained through experience. Nevertheless, the effects of players' experience-based learning on the evolution and maintenance of cooperation in social dilemma games remain relatively unclear. Previous studies have shown that mutual cooperation between learning players is difficult to attain or requires a sophisticated learning model. In the context of the iterated Prisoner's dilemma, we numerically examine the performance of a reinforcement learning model. Our model modifies those of Karandikar et al. (1998), Posch et al. (1999), and Macy and Flache (2002), in which players satisfice if the obtained payoff exceeds a dynamic threshold. We show that players obeying the modified learning rule mutually cooperate with high probability if the threshold dynamics are not too fast and the association between the reinforcement signal and the action in the next round is sufficiently strong. The learning players also perform efficiently against the reactive strategy. In evolutionary dynamics, they can invade a population of players adopting simpler but competitive strategies. Our version of the reinforcement learning model is no more complicated than the previous models and is simple yet flexible. It may serve to explore the relationships between learning and evolution in social dilemma situations.
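To make the model class concrete, the following is a minimal sketch of aspiration-based reinforcement learning in the iterated Prisoner's dilemma, in the Bush-Mosteller style of Macy and Flache (2002). The payoff values, the exact update rules, and the parameters `learning_rate` (strength of the association between the reinforcement signal and the next action) and `habituation` (speed of the threshold dynamics) are illustrative assumptions, not the paper's exact specification.

```python
import random

# Illustrative Prisoner's dilemma payoffs (T > R > P > S); values assumed.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def payoff(my_action, other_action):
    """Row player's payoff; actions are 'C' (cooperate) or 'D' (defect)."""
    return {('C', 'C'): R, ('C', 'D'): S,
            ('D', 'C'): T, ('D', 'D'): P}[(my_action, other_action)]

class AspirationLearner:
    """Bush-Mosteller-style learner with a dynamic aspiration threshold.

    A payoff above the aspiration level satisfices (positive stimulus) and
    reinforces the last action; a payoff below it inhibits that action.
    """
    def __init__(self, p_cooperate=0.5, aspiration=2.0,
                 learning_rate=0.5, habituation=0.05):
        self.p = p_cooperate          # probability of cooperating
        self.aspiration = aspiration  # dynamic threshold
        self.l = learning_rate
        self.h = habituation

    def act(self):
        self.last_action = 'C' if random.random() < self.p else 'D'
        return self.last_action

    def update(self, pay):
        # Normalized stimulus in [-1, 1]: positive iff the payoff satisfices.
        span = max(abs(T - self.aspiration), abs(S - self.aspiration), 1e-9)
        s = (pay - self.aspiration) / span
        # Reinforce or inhibit the probability of the action just taken.
        if self.last_action == 'C':
            self.p += self.l * s * ((1 - self.p) if s >= 0 else self.p)
        else:
            self.p -= self.l * s * (self.p if s >= 0 else (1 - self.p))
        self.p = min(max(self.p, 0.001), 0.999)  # keep some exploration
        # Slow threshold dynamics: aspiration drifts toward recent payoffs.
        self.aspiration = (1 - self.h) * self.aspiration + self.h * pay

# Two learners playing the iterated game against each other.
a, b = AspirationLearner(), AspirationLearner()
for t in range(5000):
    xa, xb = a.act(), b.act()
    a.update(payoff(xa, xb))
    b.update(payoff(xb, xa))
print(f"P(cooperate): a={a.p:.2f}, b={b.p:.2f}")
```

Consistent with the conditions stated in the abstract, slow threshold dynamics (small `habituation`) combined with a strong reinforcement association (large `learning_rate`) is the regime in which such runs are most likely to settle near mutual cooperation.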
| Original language | English |
| --- | --- |
| Pages (from-to) | 55-62 |
| Number of pages | 8 |
| Journal | Journal of Theoretical Biology |
| Volume | 278 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2011 Jun 7 |
| Externally published | Yes |
Keywords
- Cooperation
- Direct reciprocity
- Prisoner's dilemma
- Reinforcement learning
ASJC Scopus subject areas
- Statistics and Probability
- Modelling and Simulation
- Biochemistry, Genetics and Molecular Biology (all)
- Immunology and Microbiology (all)
- Agricultural and Biological Sciences (all)
- Applied Mathematics