TY - JOUR
T1 - Multi-stream end-to-end speech recognition
AU - Li, Ruizhi
AU - Wang, Xiaofei
AU - Mallidi, Sri Harish
AU - Watanabe, Shinji
AU - Hori, Takaaki
AU - Hermansky, Hynek
N1 - Funding Information:
Manuscript received June 17, 2019; revised October 18, 2019; accepted November 26, 2019. Date of publication December 13, 2019; date of current version January 21, 2020. This work was supported by the National Science Foundation under Grants 1704170 and 1743616. The work of H. Hermansky was supported by a Google Faculty Award. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Xiaodong Cui. (Corresponding author: Ruizhi Li.) R. Li, X. Wang, S. Watanabe, and H. Hermansky are with the Johns Hopkins University (JHU), Baltimore, MD 21218 USA (e-mail: ruizhili@jhu.edu; xiaofeiwang@jhu.edu; shinjiw@ieee.org; hynek@jhu.edu).
Publisher Copyright:
© 2020 IEEE.
PY - 2020
Y1 - 2020
AB - Attention-based methods and Connectionist Temporal Classification (CTC) networks have been promising research directions for end-to-end (E2E) Automatic Speech Recognition (ASR). The joint CTC/Attention model has achieved great success by utilizing both architectures during multi-task training and joint decoding. In this article, we present a multi-stream framework based on joint CTC/Attention E2E ASR, with parallel streams represented by separate encoders aiming to capture diverse information. On top of the regular attention networks, a Hierarchical Attention Network (HAN) is introduced to steer the decoder toward the most informative encoders. A separate CTC network is assigned to each stream to force monotonic alignments. Two representative frameworks are proposed and discussed: the Multi-Encoder Multi-Resolution (MEM-Res) framework and the Multi-Encoder Multi-Array (MEM-Array) framework. In the MEM-Res framework, two heterogeneous encoders with different architectures, temporal resolutions, and separate CTC networks work in parallel to extract complementary information from the same acoustics. Experiments are conducted on Wall Street Journal (WSJ) and CHiME-4, resulting in relative Word Error Rate (WER) reductions of 18.0-32.1% and a best WER of 3.6% on the WSJ eval92 test set. The MEM-Array framework aims at improving far-field ASR robustness using multiple microphone arrays, each handled by a separate encoder. Compared with the best single-array results, the proposed framework achieves relative WER reductions of 3.7% and 9.7% on the AMI and DIRHA multi-array corpora, respectively, and also outperforms conventional fusion strategies.
KW - End-to-end speech recognition
KW - connectionist temporal classification
KW - encoder-decoder
KW - hierarchical attention network (HAN)
KW - joint CTC/attention
KW - multi-encoder multi-array (MEM-Array)
KW - multi-encoder multi-resolution (MEM-Res)
UR - http://www.scopus.com/inward/record.url?scp=85078799947&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078799947&partnerID=8YFLogxK
U2 - 10.1109/TASLP.2019.2959721
DO - 10.1109/TASLP.2019.2959721
M3 - Article
AN - SCOPUS:85078799947
SN - 2329-9290
VL - 28
SP - 646
EP - 655
JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing
JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing
M1 - 8932598
ER -