Supporting non-native speakers’ listening comprehension with automated transcripts

Xun Cao*, Naomi Yamashita, Toru Ishida

*Corresponding author of this work

Research output: Chapter

Abstract

Various language services exist to support the listening comprehension of non-native speakers (NNSs). One important service is providing NNSs with real-time transcripts generated by automatic speech recognition (ASR) technologies. The goal of our research is to explore the effects of ASR transcripts on the listening comprehension of NNSs and to consider how to support NNSs with ASR transcripts more effectively. To reach this goal, we ran three studies. The first study investigates the comprehension problems faced by NNSs; the second examines how ASR transcripts affect their listening comprehension, e.g., which types of comprehension problems could and could not be solved by reading ASR transcripts. Finally, the third study explores the potential of using eye-tracking data to detect their comprehension problems. Our data analysis identified thirteen types of listening comprehension problems. ASR transcripts helped the NNSs solve certain problems, e.g., “failed to recognize words they know.” However, the transcripts did not solve problems such as “lack of vocabulary,” and in such cases actually increased the burden on the NNSs. Results also show that eye-tracking data allow reasonably accurate predictions (83.8%) of the types of problems encountered by NNSs. Our findings provide insight into ways of designing real-time adaptive support systems for NNSs.
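To illustrate the kind of prediction described above, here is a minimal, hypothetical sketch of classifying a listener's comprehension-problem type from gaze features. The feature names (mean fixation duration, regressions per line), the toy data, and the nearest-centroid model are all invented for illustration; the chapter does not specify which features or classifier were actually used.

```python
def centroid(rows):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_predict(train, sample):
    """train: {label: list of feature vectors}.
    Returns the label whose centroid is closest (squared
    Euclidean distance) to the sample vector."""
    best, best_d = None, float("inf")
    for label, rows in train.items():
        c = centroid(rows)
        d = sum((a - b) ** 2 for a, b in zip(c, sample))
        if d < best_d:
            best, best_d = label, d
    return best

# Toy gaze features: [mean fixation duration (ms), regressions per line]
train = {
    "failed to recognize known word": [[220, 1.0], [240, 1.2]],
    "lack of vocabulary":             [[420, 3.0], [450, 3.4]],
}
print(nearest_centroid_predict(train, [430, 3.1]))  # → lack of vocabulary
```

In practice a study like this would use many more gaze features and a trained statistical classifier; the sketch only shows the overall shape of mapping eye-tracking measurements to discrete problem types.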

Original language: English
Host publication title: Cognitive Technologies
Publisher: Springer Verlag
Pages: 157-173
Number of pages: 17
ISBN (Print): 9789811077920
DOI
Publication status: Published - 2018
Externally published: Yes

Publication series

Name: Cognitive Technologies
Number: 9789811077920
ISSN (Print): 1611-2482

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
