TY - GEN
T1 - A Comparative Study on Transformer vs RNN in Speech Applications
AU - Karita, Shigeki
AU - Wang, Xiaofei
AU - Watanabe, Shinji
AU - Yoshimura, Takenori
AU - Zhang, Wangyou
AU - Chen, Nanxin
AU - Hayashi, Tomoki
AU - Hori, Takaaki
AU - Inaguma, Hirofumi
AU - Jiang, Ziyan
AU - Someki, Masao
AU - Soplin, Nelson Enrique Yalta
AU - Yamamoto, Ryuichi
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - Sequence-to-sequence models have been widely used in end-to-end speech processing, for example, automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS). This paper focuses on an emergent sequence-to-sequence model called Transformer, which achieves state-of-the-art performance in neural machine translation and other natural language processing applications. We undertook intensive studies in which we experimentally compared and analyzed Transformer and conventional recurrent neural networks (RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS benchmarks. Our experiments revealed various training tips and significant performance benefits obtained with Transformer for each task, including the surprising superiority of Transformer in 13/15 ASR benchmarks in comparison with RNN. We are preparing to release Kaldi-style reproducible recipes using open source and publicly available datasets for all the ASR, ST, and TTS tasks for the community to succeed our exciting outcomes.
AB - Sequence-to-sequence models have been widely used in end-to-end speech processing, for example, automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS). This paper focuses on an emergent sequence-to-sequence model called Transformer, which achieves state-of-the-art performance in neural machine translation and other natural language processing applications. We undertook intensive studies in which we experimentally compared and analyzed Transformer and conventional recurrent neural networks (RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS benchmarks. Our experiments revealed various training tips and significant performance benefits obtained with Transformer for each task, including the surprising superiority of Transformer in 13/15 ASR benchmarks in comparison with RNN. We are preparing to release Kaldi-style reproducible recipes using open source and publicly available datasets for all the ASR, ST, and TTS tasks for the community to succeed our exciting outcomes.
KW - Recurrent Neural Networks
KW - Speech Recognition
KW - Speech Translation
KW - Text-to-Speech
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85081603635&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081603635&partnerID=8YFLogxK
U2 - 10.1109/ASRU46091.2019.9003750
DO - 10.1109/ASRU46091.2019.9003750
M3 - Conference contribution
AN - SCOPUS:85081603635
T3 - 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Proceedings
SP - 449
EP - 456
BT - 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019
Y2 - 15 December 2019 through 18 December 2019
ER -