Non-Autoregressive Transformer for Speech Recognition

Nanxin Chen, Shinji Watanabe, Jesus Villalba, Piotr Zelasko, Najim Dehak

Research output: Contribution to journal › Article › peer-review

49 Citations (Scopus)

Abstract

Very deep transformers outperform conventional bi-directional long short-term memory networks for automatic speech recognition (ASR) by a significant margin. However, because they are autoregressive models, their computational complexity remains a prohibitive factor for deployment in production systems. To address this problem, we study two different non-autoregressive transformer structures for ASR: the Audio-Conditional Masked Language Model (A-CMLM) and the Audio-Factorized Masked Language Model (A-FMLM). During training, decoder input tokens are randomly replaced by special mask tokens, and the network is optimized to predict the masked tokens from both the unmasked context tokens and the input speech. During inference, decoding starts from all masked tokens, and the network iteratively predicts the missing tokens based on its partial results. As an example, a new decoding strategy is proposed that commits to the most confident predictions first and then fills in the rest. Experiments on Mandarin (AISHELL), Japanese (CSJ), and English (LibriSpeech) benchmarks show that such a non-autoregressive network can be trained effectively for ASR. On AISHELL in particular, the proposed method outperformed the Kaldi ASR system and matched the performance of the state-of-the-art autoregressive transformer with a 7× speedup.
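The confidence-first iterative decoding described in the abstract can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the `decoder` callable, the `MASK_ID` constant, and the fixed output length `out_len` are placeholders introduced for the example, and details such as how the output length is estimated and how many positions are re-masked per pass may differ in the actual system.

```python
# Minimal sketch of the confidence-first iterative decoding described above.
# `decoder`, `encoder_out`, MASK_ID, and the fixed output length `out_len`
# are placeholders for illustration, not the authors' exact implementation.
import torch

MASK_ID = 0  # hypothetical id of the special <mask> token


def mask_predict_decode(decoder, encoder_out, out_len, num_iters=3):
    """Start from all-mask tokens and iteratively commit the most confident ones."""
    tokens = torch.full((1, out_len), MASK_ID, dtype=torch.long)

    for it in range(num_iters):
        logits = decoder(encoder_out, tokens)       # (1, out_len, vocab_size)
        probs, preds = logits.softmax(-1).max(-1)   # per-position confidence and argmax

        # Commit to a growing fraction of the most confident positions each pass,
        # re-masking the rest so they are re-predicted with more context.
        num_keep = int(out_len * (it + 1) / num_iters)
        keep = probs[0].topk(num_keep).indices

        tokens = torch.full_like(tokens, MASK_ID)
        tokens[0, keep] = preds[0, keep]

    return preds  # (1, out_len) predicted token ids after the final pass
```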

Original language: English
Article number: 9292943
Pages (from-to): 121-125
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 28
DOIs
Publication status: Published - 2021
Externally published: Yes

Keywords

  • Neural networks
  • non-autoregressive
  • speech recognition

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
