TY - GEN
T1 - Norm emergence via influential weight propagation in complex networks
AU - Shibusawa, Ryosuke
AU - Sugawara, Toshiharu
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/12/12
Y1 - 2014/12/12
N2 - We propose an influence-based aggregative learning framework that facilitates the emergence of social norms in complex networks and investigate how a norm converges through learning via iterated local interactions in a coordination game. In society, humans coordinate their behavior not only by exchanging information but also on the basis of norms that are often derived individually from interactions without a centralized authority. Coordination through norms has received much attention in multi-agent systems research. Moreover, because agents often act as delegates of humans, they should have 'mental' models of how to interact with others and incorporate differences of opinion. Because norms are meaningful only when all or most agents share the same norm and can expect that others will follow it, it is important to investigate the mechanism of norm emergence through learning with local, individual interactions in an agent society. Our norm-learning method borrows from the opinion aggregation process while taking into account the influence of local opinions in tightly coordinated human communities. We conducted experiments showing how our learning framework facilitates the propagation of norms in a number of complex agent networks.
AB - We propose an influence-based aggregative learning framework that facilitates the emergence of social norms in complex networks and investigate how a norm converges through learning via iterated local interactions in a coordination game. In society, humans coordinate their behavior not only by exchanging information but also on the basis of norms that are often derived individually from interactions without a centralized authority. Coordination through norms has received much attention in multi-agent systems research. Moreover, because agents often act as delegates of humans, they should have 'mental' models of how to interact with others and incorporate differences of opinion. Because norms are meaningful only when all or most agents share the same norm and can expect that others will follow it, it is important to investigate the mechanism of norm emergence through learning with local, individual interactions in an agent society. Our norm-learning method borrows from the opinion aggregation process while taking into account the influence of local opinions in tightly coordinated human communities. We conducted experiments showing how our learning framework facilitates the propagation of norms in a number of complex agent networks.
KW - Complex Network
KW - Influence
KW - Multi-agent system
KW - Norm
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=84921019900&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84921019900&partnerID=8YFLogxK
U2 - 10.1109/ENIC.2014.28
DO - 10.1109/ENIC.2014.28
M3 - Conference contribution
AN - SCOPUS:84921019900
T3 - Proceedings - 2014 European Network Intelligence Conference, ENIC 2014
SP - 30
EP - 37
BT - Proceedings - 2014 European Network Intelligence Conference, ENIC 2014
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 1st European Network Intelligence Conference, ENIC 2014
Y2 - 29 September 2014 through 30 September 2014
ER -