TY - GEN
T1 - Joint Speech Recognition and Audio Captioning
AU - Narisetty, Chaitanya
AU - Tsunoo, Emiru
AU - Chang, Xuankai
AU - Kashiwagi, Yosuke
AU - Hentschel, Michael
AU - Watanabe, Shinji
N1 - Funding Information:
This work was supported in part by Sony Group Corporation and used the Extreme Science and Engineering Discovery Environment (XSEDE) [27], which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system [28], which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).
Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
AB - Speech samples recorded in both indoor and outdoor environments are often contaminated with secondary audio sources. Most end-to-end monaural speech recognition systems either remove these background sounds using speech enhancement or train noise-robust models. For better model interpretability and holistic understanding, we aim to bring together the growing field of automated audio captioning (AAC) and the thoroughly studied field of automatic speech recognition (ASR). The goal of AAC is to generate natural language descriptions of the contents of audio samples. We propose several approaches for end-to-end joint modeling of the ASR and AAC tasks and demonstrate their advantages over traditional approaches, which model these tasks independently. A major hurdle in evaluating our proposed approaches is the lack of labeled audio datasets with both speech transcriptions and audio captions. We therefore create a multi-task dataset by mixing the clean-speech Wall Street Journal corpus with multiple levels of background noise chosen from the AudioCaps dataset. We also perform an extensive experimental evaluation and show improvements of our proposed methods over existing state-of-the-art ASR and AAC methods.
KW - AAC
KW - ASR
KW - audio captioning
KW - joint modeling
KW - speech recognition
UR - http://www.scopus.com/inward/record.url?scp=85131231282&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131231282&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9746601
DO - 10.1109/ICASSP43922.2022.9746601
M3 - Conference contribution
AN - SCOPUS:85131231282
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 7892
EP - 7896
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
Y2 - 23 May 2022 through 27 May 2022
ER -