TY - GEN
T1 - Reorganization of agent networks with reinforcement learning based on communication delay
AU - Urakawa, Kazuki
AU - Sugawara, Toshiharu
PY - 2012
Y1 - 2012
AB - We propose a team formation method for task allocation in agent networks that combines reinforcement learning based on communication delay with reorganization of the agent network. A task in a distributed environment such as an Internet application, including grid computing and service-oriented computing, is usually accomplished by performing a number of subtasks. These subtasks are constructed on demand in a bottom-up manner and must be carried out by appropriate agents that have the capabilities and computational resources each subtask requires. The efficient and effective allocation of tasks to appropriate agents is therefore a key issue in this kind of system. In our model, this allocation problem is formulated as team formation among agents in a task-oriented domain. From this perspective, a number of studies have been conducted that incorporate learning and reorganization. The aim of this paper is to extend the conventional method from two viewpoints. First, our proposed method uses only locally available information for learning, making it applicable to real systems. Second, we introduce the elimination of links, as well as the generation of links, in the agent network to improve learning efficiency. We experimentally show that this extension considerably improves the efficiency of team formation compared with the conventional method, and that it makes the agent network adaptive to environmental changes.
KW - Distributed cooperative
KW - Multi-agent reinforcement learning
KW - Reorganization
KW - Team formation
UR - http://www.scopus.com/inward/record.url?scp=84878465215&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84878465215&partnerID=8YFLogxK
U2 - 10.1109/WI-IAT.2012.105
DO - 10.1109/WI-IAT.2012.105
M3 - Conference contribution
AN - SCOPUS:84878465215
SN - 9780769548807
T3 - Proceedings - 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2012
SP - 324
EP - 331
BT - Proceedings - 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2012
T2 - 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2012
Y2 - 4 December 2012 through 7 December 2012
ER -