TY - JOUR
T1 - End-to-end far-field speech recognition with unified dereverberation and beamforming
AU - Zhang, Wangyou
AU - Subramanian, Aswin Shanmugam
AU - Chang, Xuankai
AU - Watanabe, Shinji
AU - Qian, Yanmin
N1 - Funding Information:
This work was supported by the China NSFC project No. U1736202. Experiments have been carried out on the PI supercomputers at Shanghai Jiao Tong University. We would like to thank the NTT Communication Laboratories for the use of their DNN-WPE module for our implementation.
Publisher Copyright:
© 2020 ISCA
PY - 2020
Y1 - 2020
N2 - Despite successful applications of end-to-end approaches in multi-channel speech recognition, performance still degrades severely when the speech is corrupted by reverberation. In this paper, we integrate a dereverberation module into the end-to-end multi-channel speech recognition system and explore two different frontend architectures. First, a multi-source mask-based weighted prediction error (WPE) module is incorporated in the frontend for dereverberation. Second, another novel frontend architecture is proposed, which extends the weighted power minimization distortionless response (WPD) convolutional beamformer to perform simultaneous separation and dereverberation. We derive a new formulation from the original WPD, which can handle multi-source input, and replace eigenvalue decomposition with the matrix inverse operation to make the back-propagation algorithm more stable. The above two architectures are optimized in a fully end-to-end manner, using only the speech recognition criterion. Experiments on both the spatialized wsj1-2mix corpus and REVERB show that our proposed model outperforms conventional methods in reverberant scenarios.
AB - Despite successful applications of end-to-end approaches in multi-channel speech recognition, performance still degrades severely when the speech is corrupted by reverberation. In this paper, we integrate a dereverberation module into the end-to-end multi-channel speech recognition system and explore two different frontend architectures. First, a multi-source mask-based weighted prediction error (WPE) module is incorporated in the frontend for dereverberation. Second, another novel frontend architecture is proposed, which extends the weighted power minimization distortionless response (WPD) convolutional beamformer to perform simultaneous separation and dereverberation. We derive a new formulation from the original WPD, which can handle multi-source input, and replace eigenvalue decomposition with the matrix inverse operation to make the back-propagation algorithm more stable. The above two architectures are optimized in a fully end-to-end manner, using only the speech recognition criterion. Experiments on both the spatialized wsj1-2mix corpus and REVERB show that our proposed model outperforms conventional methods in reverberant scenarios.
KW - Dereverberation
KW - Neural beamforming
KW - Overlapped speech recognition
KW - Speech separation
KW - WPD
UR - http://www.scopus.com/inward/record.url?scp=85098131286&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098131286&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2020-2432
DO - 10.21437/Interspeech.2020-2432
M3 - Conference article
AN - SCOPUS:85098131286
SN - 2308-457X
VL - 2020-October
SP - 324
EP - 328
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
T2 - 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020
Y2 - 25 October 2020 through 29 October 2020
ER -