Coordination structures generated by deep reinforcement learning in distributed task executions

Yuki Miyashita, Toshiharu Sugawara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We investigate the coordination structures generated by a deep Q-network (DQN) in distributed task execution. Cooperation and coordination are crucial issues in multi-agent systems, and highly sophisticated design or learning is required to achieve effective structures or regimes of coordination. In this paper, we show that agents establish a division of labor in a bottom-up manner, each determining its own implicit area of responsibility, when the input to the DQN consists of the agent's own observation and its absolute location.
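The abstract states that each agent's DQN input combines its own observation with its absolute location. As a rough illustration of what such an input structure could look like (the exact encoding is our assumption, not given in the abstract), the sketch below flattens a local observation window and appends the agent's normalized grid coordinates:

```python
import numpy as np

def build_dqn_input(local_window, pos, grid_shape):
    """Combine an agent's own observation with its absolute location.

    local_window: 2-D array the agent observes around itself.
    pos: the agent's absolute (row, col) position on the grid.
    grid_shape: (height, width) of the full environment grid.

    This concatenation scheme is only one plausible encoding; the
    paper's actual input representation may differ.
    """
    obs = np.asarray(local_window, dtype=np.float32).ravel()
    # Normalize absolute coordinates to [0, 1] so that position and
    # observation features live on comparable scales.
    coords = np.array(
        [pos[0] / (grid_shape[0] - 1), pos[1] / (grid_shape[1] - 1)],
        dtype=np.float32,
    )
    return np.concatenate([obs, coords])

# Example: a 3x3 observation window plus a position on a 5x5 grid
# yields a flat feature vector of length 9 + 2 = 11.
x = build_dqn_input(np.ones((3, 3)), pos=(2, 4), grid_shape=(5, 5))
```

Feeding absolute location alongside the local view is what lets each agent's learned policy become location-dependent, which the paper links to the emergence of implicit responsible areas.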

Original language: English
Title of host publication: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 2129-2131
Number of pages: 3
ISBN (Electronic): 9781510892002
Publication status: Published - 2019
Event: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019 - Montreal, Canada
Duration: 2019 May 13 - 2019 May 17

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 4
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Conference

Conference: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Country/Territory: Canada
City: Montreal
Period: 19/5/13 - 19/5/17

Keywords

  • Cooperation
  • Coordination
  • Divisional cooperation
  • Multi-agent deep reinforcement learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
