TY - JOUR
T1 - COMBINATIONS OF MICRO-MACRO STATES AND SUBGOALS DISCOVERY IN HIERARCHICAL REINFORCEMENT LEARNING FOR PATH FINDING
AU - Setyawan, Gembong Edhi
AU - Sawada, Hideyuki
AU - Hartono, Pitoyo
N1 - Publisher Copyright:
© 2022 ICIC International.
PY - 2022/4
Y1 - 2022/4
N2 - While Reinforcement Learning (RL) is one of the strongest unsupervised learning algorithms, it often faces difficulties dealing with complex environments. These difficulties correlate with the curse of dimensionality, in which an excessively large number of states makes the RL process prohibitively difficult. Hierarchical Reinforcement Learning (HRL) has been proposed to overcome the weaknesses of RL by hierarchically decomposing a complex problem into more manageable sub-problems. This paper proposes Micro-Macro States Combination (MMSC) as a new approach to HRL that formulates the task in two layers. The lower layer depicts the task in its microstates, which represent the original states, while the upper layer depicts macrostates, each a collection of microstates. The macrostates can be considered higher abstractions of the original states that allow the RL to perceive the problem differently. Here, the proposed MMSC operates not only on the microstates but also on their higher-level abstractions, enabling the RL to flexibly change its perspective during problem solving, each time choosing the perspective that leads it to the solution faster. In this paper, the MMSC algorithm is formulated and tested on path-finding problems in grid worlds. The novelty of the proposed algorithm, in hierarchically decomposing the given problems and in automatically reaching goals in the sub-problems, is tested against traditional RL and other HRL methods and quantitatively analyzed.
AB - While Reinforcement Learning (RL) is one of the strongest unsupervised learning algorithms, it often faces difficulties dealing with complex environments. These difficulties correlate with the curse of dimensionality, in which an excessively large number of states makes the RL process prohibitively difficult. Hierarchical Reinforcement Learning (HRL) has been proposed to overcome the weaknesses of RL by hierarchically decomposing a complex problem into more manageable sub-problems. This paper proposes Micro-Macro States Combination (MMSC) as a new approach to HRL that formulates the task in two layers. The lower layer depicts the task in its microstates, which represent the original states, while the upper layer depicts macrostates, each a collection of microstates. The macrostates can be considered higher abstractions of the original states that allow the RL to perceive the problem differently. Here, the proposed MMSC operates not only on the microstates but also on their higher-level abstractions, enabling the RL to flexibly change its perspective during problem solving, each time choosing the perspective that leads it to the solution faster. In this paper, the MMSC algorithm is formulated and tested on path-finding problems in grid worlds. The novelty of the proposed algorithm, in hierarchically decomposing the given problems and in automatically reaching goals in the sub-problems, is tested against traditional RL and other HRL methods and quantitatively analyzed.
KW - Hierarchical abstraction
KW - Hierarchical reinforcement learning
KW - Reinforcement learning
KW - Task decomposition
UR - http://www.scopus.com/inward/record.url?scp=85125499807&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125499807&partnerID=8YFLogxK
U2 - 10.24507/ijicic.18.02.447
DO - 10.24507/ijicic.18.02.447
M3 - Article
AN - SCOPUS:85125499807
SN - 1349-4198
VL - 18
SP - 447
EP - 462
JO - International Journal of Innovative Computing, Information and Control
JF - International Journal of Innovative Computing, Information and Control
IS - 2
ER -