Dictation of Multiparty Conversation Considering Speaker Individuality and Turn Taking

Noriyuki Murai*, Tetsunori Kobayashi

*Corresponding author for this work

Research output: Article, peer-reviewed

2 Citations (Scopus)

Abstract

This paper discusses an algorithm for recognizing multiparty speech with complex turn taking. In recognizing a conversation among multiple speakers, it is necessary to know not only what was spoken, as in conventional systems, but also who spoke and up to what point. The purpose of this paper is to find a method that solves this problem. The likelihood of turn taking is incorporated into the language model of a continuous speech recognition system, and the speech properties of each speaker are represented by a statistical model. Using this approach, two algorithms are proposed that estimate the speaker and the speech content simultaneously and in parallel. Recognition experiments on conversations from TV sports news show that the proposed method corrects up to 29.5% of the errors in recognizing the speech content and up to 93.0% of the errors in recognizing the speaker.
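The joint estimation described in the abstract can be pictured as a Viterbi search over combined (speaker, word) states, where the turn-taking likelihood enters the transition score alongside a word bigram and per-speaker emission scores stand in for speaker-dependent acoustic models. The sketch below is a toy illustration under those assumptions, not the authors' algorithm: the speaker names, vocabulary, and all probability values are invented for the example.

```python
# Toy sketch: joint Viterbi decoding over (speaker, word) states.
# Turn-taking likelihood P(s_t | s_{t-1}) is folded into the transition
# score together with a word bigram; the per-frame emission dict stands
# in for speaker-dependent acoustic likelihoods. All values are made up.
import math
from itertools import product

SPEAKERS = ["anchor", "reporter"]
WORDS = ["hello", "goodbye"]

# Toy turn-taking model: the turn tends to stay with the same speaker.
TURN = {(a, b): 0.8 if a == b else 0.2 for a in SPEAKERS for b in SPEAKERS}
# Toy word bigram (uniform, for simplicity).
BIGRAM = {(w1, w2): 0.5 for w1 in WORDS for w2 in WORDS}

def viterbi(frames, init=0.5):
    """frames: list of dicts mapping (speaker, word) -> emission likelihood.

    Returns the jointly most likely (speaker, word) sequence.
    """
    states = list(product(SPEAKERS, WORDS))
    # Log-probability of the best path ending in each state.
    score = {st: math.log(init * frames[0][st]) for st in states}
    back = []
    for obs in frames[1:]:
        new, bp = {}, {}
        for s2, w2 in states:
            best_prev, best = None, -math.inf
            for s1, w1 in states:
                cand = (score[(s1, w1)]
                        + math.log(TURN[(s1, s2)] * BIGRAM[(w1, w2)]))
                if cand > best:
                    best_prev, best = (s1, w1), cand
            new[(s2, w2)] = best + math.log(obs[(s2, w2)])
            bp[(s2, w2)] = best_prev
        score, back = new, back + [bp]
    # Trace back the best joint speaker/word path.
    st = max(score, key=score.get)
    path = [st]
    for bp in reversed(back):
        st = bp[st]
        path.append(st)
    return list(reversed(path))
```

Because speaker identity and word identity share one state space, a strong acoustic cue for a speaker change can override the turn-taking prior, and vice versa, which mirrors the simultaneous, parallel estimation the abstract describes.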

Original language: English
Pages (from-to): 103-111
Number of pages: 9
Journal: Systems and Computers in Japan
Volume: 34
Issue number: 13
DOI
Publication status: Published - 30 Nov 2003

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Hardware and Architecture
  • Computational Theory and Mathematics
