Integrating detailed information into a language model

Ruiqiang Zhang, Ezra Black, Andrew Finch, Yoshinori Sagisaka

Research output: Conference contribution

6 Citations (Scopus)

Abstract

Applying natural language processing techniques to language modeling is a key problem in speech recognition. This paper describes a maximum entropy-based approach to language modeling in which both words and syntactic and semantic tags in the long history are used as the basis for complex linguistic questions. These questions are integrated with a standard trigram language model, or with a standard trigram language model combined with long-history word triggers, and the resulting language model is used to rescore the N-best hypotheses output by the ATRSPREC speech recognition system. The technique removed 24% of the correctable error of the recognition system.
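The abstract describes combining a trigram model with maximum-entropy "questions" about words and syntactic/semantic tags in a long history, then rescoring the recognizer's N-best output with the combined score. The following is a minimal sketch of that kind of rescoring loop, under stated assumptions: the names (Hypothesis, extract_features, me_logprob, trigram_logprob, lm_scale), the 10-word history window, and the simplified, unnormalized max-ent score are all illustrative and are not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Hypothesis:
    words: List[str]           # word sequence from the first recognition pass
    tags: List[str]            # syntactic/semantic tag per word (assumed given)
    acoustic_logprob: float    # acoustic score from the recognizer


def extract_features(history: List[Tuple[str, str]], word: str) -> List[str]:
    """Binary 'questions' over the long history: word triggers and tag context.
    These feature templates are illustrative only."""
    feats = [f"w={word}"]
    for h_word, h_tag in history[-10:]:              # long-history window (assumed length)
        feats.append(f"trigger:{h_word}->{word}")    # word-trigger question
        feats.append(f"tag:{h_tag}->{word}")         # syntactic/semantic tag question
    return feats


def me_logprob(history: List[Tuple[str, str]], word: str,
               me_weights: Dict[str, float]) -> float:
    """Unnormalized max-ent score; a real model would normalize over the vocabulary."""
    return sum(me_weights.get(f, 0.0) for f in extract_features(history, word))


def rescore(hyps: List[Hypothesis], me_weights: Dict[str, float],
            trigram_logprob, lm_scale: float = 10.0) -> Hypothesis:
    """Rerank N-best hypotheses by acoustic score plus the combined LM score."""
    def total_score(h: Hypothesis) -> float:
        lm = 0.0
        hist: List[Tuple[str, str]] = []
        for w, t in zip(h.words, h.tags):
            lm += trigram_logprob(hist, w) + me_logprob(hist, w, me_weights)
            hist.append((w, t))
        return h.acoustic_logprob + lm_scale * lm

    return max(hyps, key=total_score)
```

In practice the trigram log-probability would come from a trained back-off model and the max-ent weights from iterative training on the feature questions; the sketch only shows how the two scores could be combined during N-best rescoring.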

Original language: English
Title of host publication: Speech Processing II
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1595-1598
Number of pages: 4
ISBN (electronic): 0780362934
DOI
Publication status: Published - 2000
Externally published: Yes
Event: 25th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2000 - Istanbul, Turkey
Duration: 5 Jun 2000 - 9 Jun 2000

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 3
ISSN (print): 1520-6149

Conference

Conference: 25th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2000
Country/Territory: Turkey
City: Istanbul
Period: 00/6/5 - 00/6/9

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
