Stable Deep Reinforcement Learning Method by Predicting Uncertainty in Rewards as a Subtask

Kanata Suzuki, Tetsuya Ogata*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

In recent years, a variety of tasks have been accomplished by deep reinforcement learning (DRL). However, when applying DRL to tasks in a real-world environment, designing an appropriate reward is difficult. Rewards obtained via actual hardware sensors may include noise, misinterpretation, or failed observations. The learning instability caused by these unstable signals is a problem that remains to be solved in DRL. In this work, we propose an approach that extends existing DRL models by adding a subtask to directly estimate the variance contained in the reward signal. The model then takes the feature map learned by the subtask in a critic network and sends it to the actor network. This enables stable learning that is robust to the effects of potential noise. The results of experiments in the Atari game domain with unstable reward signals show that our method stabilizes training convergence. We also discuss the extensibility of the model by visualizing feature maps. This approach has the potential to make DRL more practical for use in noisy, real-world scenarios.
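The abstract describes attaching an auxiliary subtask to the critic that predicts the variance of the reward signal, then sharing the resulting feature map with the actor. As a rough illustration of the idea (not the paper's actual architecture), the sketch below uses NumPy with illustrative layer sizes and names; the Gaussian negative log-likelihood loss is one common way to train such a heteroscedastic variance head, assumed here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


class CriticWithVarianceBranch:
    """Critic sharing one feature map between a value head and an
    auxiliary head that predicts the reward signal's (log-)variance.
    All sizes and parameter names are illustrative assumptions."""

    def __init__(self, obs_dim, hidden_dim=32):
        self.W_feat = rng.normal(0.0, 0.1, (obs_dim, hidden_dim))
        self.w_value = rng.normal(0.0, 0.1, hidden_dim)
        self.w_logvar = rng.normal(0.0, 0.1, hidden_dim)

    def forward(self, obs):
        feat = relu(obs @ self.W_feat)   # shared feature map (also passed to the actor)
        value = feat @ self.w_value      # state-value estimate
        log_var = feat @ self.w_logvar   # predicted log-variance of the reward
        return feat, value, np.exp(log_var)


def reward_nll(reward, value, var):
    """Gaussian negative log-likelihood for the observed reward.
    Transitions whose rewards are predicted to be noisy (large var)
    contribute a smaller squared-error term, stabilizing the update."""
    return 0.5 * (np.log(var) + (reward - value) ** 2 / var)


# Usage: one forward pass and one loss evaluation on a toy observation.
critic = CriticWithVarianceBranch(obs_dim=4)
feat, value, var = critic.forward(np.ones(4))
loss = reward_nll(reward=1.0, value=value, var=var)
```

Because the squared error is divided by the predicted variance, gradients from transitions the subtask flags as unreliable are damped, which matches the abstract's claim of robustness to noisy reward signals.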

Original language: English
Title of host publication: Neural Information Processing - 27th International Conference, ICONIP 2020, Proceedings
Editors: Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, Irwin King
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 651-662
Number of pages: 12
ISBN (Print): 9783030638320
DOIs
Publication status: Published - 2020
Event: 27th International Conference on Neural Information Processing, ICONIP 2020 - Bangkok, Thailand
Duration: 2020 Nov 18 - 2020 Nov 22

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12533 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 27th International Conference on Neural Information Processing, ICONIP 2020
Country/Territory: Thailand
City: Bangkok
Period: 20/11/18 - 20/11/22

Keywords

  • Deep reinforcement learning
  • Uncertainty
  • Variance branch

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)
