Massively multilingual adversarial speech recognition

Oliver Adams, Matthew Wiesner, Shinji Watanabe, David Yarowsky

Research output: Conference contribution

42 Citations (Scopus)

Abstract

We report on adaptation of multilingual end-to-end speech recognition models trained on as many as 100 languages. Our findings shed light on the relative importance of similarity between the target and pretraining languages along the dimensions of phonetics, phonology, language family, geographical location, and orthography. In this context, experiments demonstrate the effectiveness of two additional pretraining objectives in encouraging language-independent encoder representations: a context-independent phoneme objective paired with a language-adversarial classification objective.
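The two auxiliary pretraining objectives mentioned in the abstract lend themselves to a compact illustration. Below is a minimal PyTorch sketch, not the authors' released implementation: it assumes a recurrent shared encoder, a frame-level context-independent phoneme classifier, and a language classifier trained through a gradient-reversal layer (one common way to realize an adversarial classification objective). All module names, dimensions, and weightings are hypothetical.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient pushes the encoder away from language-discriminative features.
        return -ctx.lamb * grad_output, None


class MultiObjectiveEncoder(nn.Module):
    """Shared encoder with auxiliary phoneme and language-adversarial heads (illustrative)."""

    def __init__(self, feat_dim=80, hidden=320, n_phones=100, n_langs=100, lamb=0.5):
        super().__init__()
        self.lamb = lamb
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.phone_head = nn.Linear(hidden, n_phones)  # frame-level phoneme logits
        self.lang_head = nn.Linear(hidden, n_langs)    # utterance-level language logits

    def forward(self, feats):
        # feats: (batch, time, feat_dim) acoustic features
        enc, _ = self.encoder(feats)
        phone_logits = self.phone_head(enc)            # (batch, time, n_phones)
        # Mean-pool over time, then reverse gradients before the language classifier.
        pooled = GradReverse.apply(enc.mean(dim=1), self.lamb)
        lang_logits = self.lang_head(pooled)           # (batch, n_langs)
        return phone_logits, lang_logits


# Illustrative usage: the language loss is minimized by the classifier head, but its
# reversed gradient discourages language-specific information in the encoder output.
# A full pretraining loss would also include the main ASR objective and a phoneme loss.
model = MultiObjectiveEncoder()
feats = torch.randn(4, 200, 80)
phone_logits, lang_logits = model(feats)
lang_loss = nn.CrossEntropyLoss()(lang_logits, torch.randint(0, 100, (4,)))
```

The design choice behind the gradient-reversal layer is that the language classifier and the encoder play opposing roles: the head learns to identify the language, while the reversed gradient trains the encoder to make that identification harder, encouraging language-independent representations.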

Original language: English
Title of host publication: Long and Short Papers
Publisher: Association for Computational Linguistics (ACL)
Pages: 96-108
Number of pages: 13
ISBN (electronic): 9781950737130
Publication status: Published - 2019
Externally published: Yes
Event: 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2019 - Minneapolis, United States
Duration: Jun 2, 2019 - Jun 7, 2019

Publication series

Name: NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference
Volume: 1

Conference

Conference: 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2019
Country/Territory: United States
City: Minneapolis
Period: 19/6/2 - 19/6/7

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science Applications
  • Linguistics and Language
