TY - GEN
T1 - Attention-Based Multi-Hypothesis Fusion for Speech Summarization
AU - Kano, Takatomo
AU - Ogawa, Atsunori
AU - Delcroix, Marc
AU - Watanabe, Shinji
N1 - Funding Information:
We would like to thank Jiatong Shi at Johns Hopkins University for providing a script of ROVER-based system combinations.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Speech summarization, which generates a text summary from speech, can be achieved by combining automatic speech recognition (ASR) and text summarization (TS). With this cascade approach, we can exploit state-of-the-art models and large training datasets for both subtasks, i.e., Transformer for ASR and Bidirectional Encoder Representations from Transformers (BERT) for TS. However, ASR errors directly affect the quality of the output summary in the cascade approach. We propose a cascade speech summarization model that is robust to ASR errors and that exploits multiple hypotheses generated by ASR to attenuate the effect of ASR errors on the summary. We investigate several schemes to combine ASR hypotheses. First, we propose using the sum of sub-word embedding vectors weighted by their posterior values provided by an ASR system as an input to a BERT-based TS system. Then, we introduce a more general scheme that uses an attention-based fusion module added to a pre-trained BERT module to align and combine several ASR hypotheses. Finally, we perform speech summarization experiments on the How2 dataset and a newly assembled TED-based dataset that we will release with this paper (https://github.com/nttcslab-sp-admin/TEDSummary). These experiments show that retraining the BERT-based TS system with these schemes can improve summarization performance and that the attention-based fusion module is particularly effective.
AB - Speech summarization, which generates a text summary from speech, can be achieved by combining automatic speech recognition (ASR) and text summarization (TS). With this cascade approach, we can exploit state-of-the-art models and large training datasets for both subtasks, i.e., Transformer for ASR and Bidirectional Encoder Representations from Transformers (BERT) for TS. However, ASR errors directly affect the quality of the output summary in the cascade approach. We propose a cascade speech summarization model that is robust to ASR errors and that exploits multiple hypotheses generated by ASR to attenuate the effect of ASR errors on the summary. We investigate several schemes to combine ASR hypotheses. First, we propose using the sum of sub-word embedding vectors weighted by their posterior values provided by an ASR system as an input to a BERT-based TS system. Then, we introduce a more general scheme that uses an attention-based fusion module added to a pre-trained BERT module to align and combine several ASR hypotheses. Finally, we perform speech summarization experiments on the How2 dataset and a newly assembled TED-based dataset that we will release with this paper (https://github.com/nttcslab-sp-admin/TEDSummary). These experiments show that retraining the BERT-based TS system with these schemes can improve summarization performance and that the attention-based fusion module is particularly effective.
KW - Attention-based Fusion
KW - Automatic Speech Recognition
KW - BERT
KW - Speech Summarization
UR - http://www.scopus.com/inward/record.url?scp=85126783609&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126783609&partnerID=8YFLogxK
U2 - 10.1109/ASRU51503.2021.9687977
DO - 10.1109/ASRU51503.2021.9687977
M3 - Conference contribution
AN - SCOPUS:85126783609
T3 - 2021 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021 - Proceedings
SP - 487
EP - 494
BT - 2021 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021
Y2 - 13 December 2021 through 17 December 2021
ER -