TY - JOUR
T1 - Numerical analysis of a reinforcement learning model with the dynamic aspiration level in the iterated Prisoner's dilemma
AU - Masuda, Naoki
AU - Nakamura, Mitsuhiro
N1 - Funding Information:
We thank Shoma Tanabe for carefully reading the paper. N.M. acknowledges support from the Grants-in-Aid for Scientific Research (No. 20760258). M.N. acknowledges support through the Grants-in-Aid for Scientific Research from JSPS, Japan.
PY - 2011/6/7
Y1 - 2011/6/7
N2 - Humans and other animals can adapt their social behavior in response to environmental cues, including feedback obtained through experience. Nevertheless, the effects of experience-based learning on the evolution and maintenance of cooperation in social dilemma games remain relatively unclear. Previous studies showed that mutual cooperation between learning players is difficult to achieve or requires a sophisticated learning model. In the context of the iterated Prisoner's dilemma, we numerically examine the performance of a reinforcement learning model. Our model modifies those of Karandikar et al. (1998), Posch et al. (1999), and Macy and Flache (2002), in which players satisfice if the obtained payoff is larger than a dynamic threshold. We show that players obeying the modified learning rule mutually cooperate with high probability if the threshold dynamics are not too fast and the association between the reinforcement signal and the action in the next round is sufficiently strong. The learning players also perform efficiently against the reactive strategy. In evolutionary dynamics, they can invade a population of players adopting simpler but competitive strategies. Our version of the reinforcement learning model does not complicate the previous models and is sufficiently simple yet flexible. It may serve to explore the relationships between learning and evolution in social dilemma situations.
AB - Humans and other animals can adapt their social behavior in response to environmental cues, including feedback obtained through experience. Nevertheless, the effects of experience-based learning on the evolution and maintenance of cooperation in social dilemma games remain relatively unclear. Previous studies showed that mutual cooperation between learning players is difficult to achieve or requires a sophisticated learning model. In the context of the iterated Prisoner's dilemma, we numerically examine the performance of a reinforcement learning model. Our model modifies those of Karandikar et al. (1998), Posch et al. (1999), and Macy and Flache (2002), in which players satisfice if the obtained payoff is larger than a dynamic threshold. We show that players obeying the modified learning rule mutually cooperate with high probability if the threshold dynamics are not too fast and the association between the reinforcement signal and the action in the next round is sufficiently strong. The learning players also perform efficiently against the reactive strategy. In evolutionary dynamics, they can invade a population of players adopting simpler but competitive strategies. Our version of the reinforcement learning model does not complicate the previous models and is sufficiently simple yet flexible. It may serve to explore the relationships between learning and evolution in social dilemma situations.
KW - Cooperation
KW - Direct reciprocity
KW - Prisoner's dilemma
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=79952856548&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79952856548&partnerID=8YFLogxK
U2 - 10.1016/j.jtbi.2011.03.005
DO - 10.1016/j.jtbi.2011.03.005
M3 - Article
C2 - 21397610
AN - SCOPUS:79952856548
SN - 0022-5193
VL - 278
SP - 55
EP - 62
JO - Journal of Theoretical Biology
JF - Journal of Theoretical Biology
IS - 1
ER -