TY - GEN
T1 - Promotion of robust cooperation among agents in complex networks by enhanced expectation-of-cooperation strategy
AU - Otsuka, Tomoaki
AU - Sugawara, Toshiharu
N1 - Funding Information:
This work is partly supported by KAKENHI (17KT0044).
PY - 2018
Y1 - 2018
N2 - We present an interaction strategy with reinforcement learning to promote mutual cooperation among agents in complex networks. Networked computerized systems consisting of many agents that act as delegates of social entities, such as companies and organizations, are being implemented thanks to advances in networking and computer technologies. Because the relationships among agents reflect the interaction structures of the corresponding social entities in the real world, social dilemma situations such as the prisoner’s dilemma are often encountered. Thus, agents have to learn appropriate behaviors from a long-term viewpoint to function properly in the virtual society. The proposed interaction strategy, called the enhanced expectation-of-cooperation (EEoC) strategy, is an extension of our previously proposed strategy that improves robustness against defecting agents and prevents exploitation by them. Experiments demonstrated that agents using the EEoC strategy can effectively distinguish cooperative neighboring agents from all-defecting (AllD) agents and thus can spread cooperation among EEoC agents while avoiding exploitation by AllD agents. Examination of robustness against probabilistically defecting (ProbD) agents demonstrated that EEoC agents can spread and maintain mutual cooperation as long as the number of ProbD agents is not large. The EEoC strategy is thus simple and useful in actual computerized systems.
AB - We present an interaction strategy with reinforcement learning to promote mutual cooperation among agents in complex networks. Networked computerized systems consisting of many agents that act as delegates of social entities, such as companies and organizations, are being implemented thanks to advances in networking and computer technologies. Because the relationships among agents reflect the interaction structures of the corresponding social entities in the real world, social dilemma situations such as the prisoner’s dilemma are often encountered. Thus, agents have to learn appropriate behaviors from a long-term viewpoint to function properly in the virtual society. The proposed interaction strategy, called the enhanced expectation-of-cooperation (EEoC) strategy, is an extension of our previously proposed strategy that improves robustness against defecting agents and prevents exploitation by them. Experiments demonstrated that agents using the EEoC strategy can effectively distinguish cooperative neighboring agents from all-defecting (AllD) agents and thus can spread cooperation among EEoC agents while avoiding exploitation by AllD agents. Examination of robustness against probabilistically defecting (ProbD) agents demonstrated that EEoC agents can spread and maintain mutual cooperation as long as the number of ProbD agents is not large. The EEoC strategy is thus simple and useful in actual computerized systems.
UR - http://www.scopus.com/inward/record.url?scp=85036625916&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85036625916&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-72150-7_66
DO - 10.1007/978-3-319-72150-7_66
M3 - Conference contribution
AN - SCOPUS:85036625916
SN - 9783319721491
T3 - Studies in Computational Intelligence
SP - 815
EP - 828
BT - Complex Networks and Their Applications VI - Proceedings of Complex Networks 2017 (The 6th International Conference on Complex Networks and Their Applications)
A2 - Cherifi, Hocine
A2 - Cherifi, Chantal
A2 - Musolesi, Mirco
A2 - Karsai, Márton
PB - Springer Verlag
T2 - 6th International Conference on Complex Networks and Their Applications, Complex Networks 2017
Y2 - 29 November 2017 through 1 December 2017
ER -