TY - JOUR
T1 - EAT: Enhanced ASR-TTS for Self-supervised Speech Recognition
T2 - 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021
AU - Baskar, Murali Karthick
AU - Burget, Lukáš
AU - Watanabe, Shinji
AU - Astudillo, Ramon Fernandez
AU - Černocký, Jan
N1 - Funding Information:
All the authors from Brno University of Technology are supported by Czech National Science Foundation (GACR) project "NEUREM3" No. 19-26934X and Czech Ministry of Education, Youth and Sports project No. LTAIN19087 "Multi-linguality in speech technologies".
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Self-supervised ASR-TTS models suffer under out-of-domain data conditions. Here we propose an enhanced ASR-TTS (EAT) model that incorporates two main features: 1) the ASR→TTS direction is equipped with a language-model reward that penalizes ASR hypotheses before they are forwarded to TTS; 2) in the TTS→ASR direction, a hyper-parameter is introduced to scale the attention context from synthesized speech before it is sent to ASR, in order to handle out-of-domain data. Training strategies and the effectiveness of the EAT model are explored under out-of-domain data conditions. The results show that EAT significantly reduces the performance gap between supervised and self-supervised training, by 2.6% and 2.7% absolute on Librispeech and BABEL, respectively.
KW - Cycle-consistency
KW - Self-supervision
KW - Sequence-to-sequence
KW - Speech recognition
UR - http://www.scopus.com/inward/record.url?scp=85112201924&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112201924&partnerID=8YFLogxK
U2 - 10.1109/ICASSP39728.2021.9413375
DO - 10.1109/ICASSP39728.2021.9413375
M3 - Conference article
AN - SCOPUS:85112201924
SN - 0736-7791
VL - 2021-June
SP - 6753
EP - 6757
JO - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
JF - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Y2 - 6 June 2021 through 11 June 2021
ER -