TY - GEN
T1 - Comparison of Consolidation Methods for Predictive Learning of Time Series
AU - Nakajo, Ryoichi
AU - Ogata, Tetsuya
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - In environments where various tasks are sequentially given to deep neural networks (DNNs), training methods are needed that enable DNNs to learn the given tasks continuously. A DNN is typically trained on a single dataset, and continuous learning of subsequent datasets causes the problem of catastrophic forgetting. Previous studies have reported results for consolidation learning methods in recognition tasks and reinforcement learning problems. However, those methods were validated on only a few examples of predictive learning for time series. In this study, we applied elastic weight consolidation (EWC) and pseudo-rehearsal to the predictive learning of time series and compared their learning results. Evaluating the latent space after consolidation learning revealed that the EWC method acquires properties of the pre-training and subsequent datasets with the same distribution, whereas the pseudo-rehearsal method distinguishes the properties and acquires them with different distributions.
AB - In environments where various tasks are sequentially given to deep neural networks (DNNs), training methods are needed that enable DNNs to learn the given tasks continuously. A DNN is typically trained on a single dataset, and continuous learning of subsequent datasets causes the problem of catastrophic forgetting. Previous studies have reported results for consolidation learning methods in recognition tasks and reinforcement learning problems. However, those methods were validated on only a few examples of predictive learning for time series. In this study, we applied elastic weight consolidation (EWC) and pseudo-rehearsal to the predictive learning of time series and compared their learning results. Evaluating the latent space after consolidation learning revealed that the EWC method acquires properties of the pre-training and subsequent datasets with the same distribution, whereas the pseudo-rehearsal method distinguishes the properties and acquires them with different distributions.
KW - Consolidation learning
KW - Predictive learning
KW - Recurrent neural network
UR - http://www.scopus.com/inward/record.url?scp=85112717016&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112717016&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-79457-6_10
DO - 10.1007/978-3-030-79457-6_10
M3 - Conference contribution
AN - SCOPUS:85112717016
SN - 9783030794569
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 113
EP - 120
BT - Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices - 34th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2021, Proceedings
A2 - Fujita, Hamido
A2 - Selamat, Ali
A2 - Lin, Jerry Chun-Wei
A2 - Ali, Moonis
PB - Springer Science and Business Media Deutschland GmbH
T2 - 34th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2021
Y2 - 26 July 2021 through 29 July 2021
ER -