Coordinated behavior of cooperative agents using deep reinforcement learning

Elhadji Amadou Oury Diallo*, Ayumi Sugiyama, Toshiharu Sugawara

*Corresponding author for this work

Research output: Article, peer-reviewed

13 Citations (Scopus)

Abstract

In this work, we focus on an environment in which multiple agents with complementary capabilities cooperate to generate non-conflicting joint actions that achieve a specific target. The central problem is how several agents can collectively learn to coordinate their actions so that they complete a given task together without conflicts. Sequential decision-making under uncertainty, however, is one of the most challenging issues for intelligent cooperative systems. To address this, we propose a multi-agent concurrent learning framework in which agents learn coordinated behaviors that divide their areas of responsibility. The proposed framework extends recent deep reinforcement learning algorithms, including DQN, double DQN, and dueling network architectures. We then investigate how the learned behaviors change with the dynamics of the environment, the reward scheme, and the network structure. Next, we show how agents choose their actions so that the resulting joint actions are optimal. Finally, we show that our method leads to stable solutions in our specific environment.
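The framework described above builds on the DQN family of value estimators. As an illustrative sketch only (not the authors' implementation, and with hypothetical function names), the two core ingredients the abstract mentions can be written as follows: the dueling aggregation Q(s,a) = V(s) + A(s,a) - mean(A), and the double-DQN target, which selects the next action with the online network but evaluates it with the target network.

```python
def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean over a' of A(s,a').

    `value` is the scalar state value V(s); `advantages` is a list of
    per-action advantages A(s,a). Subtracting the mean advantage makes
    the decomposition identifiable.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + adv - mean_adv for adv in advantages]

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN bootstrap target for one transition.

    The action for the next state is chosen by the online network
    (argmax of `q_online_next`) but its value is read from the target
    network (`q_target_next`), which reduces overestimation bias.
    """
    if done:
        return reward
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]
```

In a concurrent multi-agent setup of the kind the paper studies, each agent would typically hold its own copy of such estimators and update them from its local observations; the sketch above only shows the single-transition value computations.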

Original language: English
Pages (from-to): 230-240
Number of pages: 11
Journal: Neurocomputing
Volume: 396
DOI
Publication status: Published - 5 Jul 2020

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
