Dual supervised learning for non-native speech recognition

Kacper Radzikowski*, Robert Nowak, Le Wang, Osamu Yoshie


Research output: Article › peer-reviewed

10 citations (Scopus)


Current automatic speech recognition (ASR) systems achieve accuracy of 90–95% or higher, depending on the methodology applied and the datasets used. However, accuracy decreases significantly when the same ASR system is used by a non-native speaker of the language to be recognized. At the same time, labeled datasets of non-native speech samples are extremely limited, both in size and in the number of languages covered. This makes it difficult to train sufficiently accurate ASR systems targeted at non-native speakers and calls for a different approach, one that can exploit the vast amounts of unlabeled data available. In this paper, we address this issue by employing dual supervised learning (DSL) and reinforcement learning with the policy gradient methodology. We tested DSL in a warm-start approach, with both models trained beforehand, and in a semi-warm-start approach, with only one of the two models pre-trained. The experiments were conducted on English pronounced by Japanese and Polish speakers. The results show that ASR systems built with DSL can achieve accuracy comparable to traditional methods while making use of unlabeled data, which is much cheaper to obtain and available in larger quantities.
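To make the DSL idea concrete, below is a minimal sketch of the duality regularizer that DSL adds to the two models' task losses. This is an illustrative assumption based on the general DSL formulation, not the paper's implementation: DSL trains a primal model f: x → y (here, speech → text) and a dual model g: y → x jointly, penalizing violations of probabilistic duality, P(x)·P(y|x; f) = P(y)·P(x|y; g).

```python
def dsl_losses(loss_f, loss_g,
               log_p_x, log_p_y_given_x,
               log_p_y, log_p_x_given_y,
               lam=0.01):
    """Return duality-regularized losses for the primal and dual models.

    All arguments are illustrative placeholders, not the paper's API:
    loss_f, loss_g   -- task losses of the two models (e.g. negative
                        log-likelihoods on a labeled or pseudo-labeled pair)
    log_p_x, log_p_y -- marginal log-probabilities of the speech sample and
                        the transcript (assumed given by external models)
    log_p_y_given_x  -- primal model's conditional log-probability
    log_p_x_given_y  -- dual model's conditional log-probability
    lam              -- weight of the duality regularizer
    """
    # Squared violation of probabilistic duality in log space:
    # log P(x) + log P(y|x; f) should equal log P(y) + log P(x|y; g)
    duality_gap = (log_p_x + log_p_y_given_x) - (log_p_y + log_p_x_given_y)
    reg = lam * duality_gap ** 2
    # Both models are penalized by the same duality term
    return loss_f + reg, loss_g + reg
```

When the two factorizations agree, the gap is zero and the task losses are unchanged; any mismatch adds the same penalty to both models, coupling their training even on samples where only one direction has labels.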

Journal: Eurasip Journal on Audio, Speech, and Music Processing
Publication status: Published - 2019 Dec 1

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
