Structural Bayesian linear regression for hidden Markov models

Shinji Watanabe*, Atsushi Nakamura, Biing Hwang Juang

*Corresponding author for this work

Research output: Article › peer-review

11 Citations (Scopus)

Abstract

Linear regression on hidden Markov model (HMM) parameters is widely used for adaptive training in time-series pattern analysis, especially speech processing. The regression parameters are usually shared among sets of Gaussians in the HMMs, with the Gaussian clusters represented by a tree. This paper realizes a fully Bayesian treatment of linear regression for HMMs that accounts for this regression tree structure by using variational techniques. We analytically derive a variational lower bound on the marginalized log-likelihood of the linear regression. By using this lower bound as an objective function, we can algorithmically optimize the tree structure and hyper-parameters of the linear regression rather than heuristically tweaking them as tuning parameters. Experiments on large-vocabulary continuous speech recognition confirm the generalizability of the proposed approach, especially when the amount of adaptation data is limited.
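The key idea in the abstract is using a (marginal-likelihood) bound as an objective for choosing hyper-parameters instead of tuning them by hand. As a minimal sketch of that idea, the snippet below uses plain Bayesian linear regression, where the log marginal likelihood (evidence) is available in closed form, and selects a prior precision `alpha` by evidence maximization over a grid. This is an illustration of evidence-based hyper-parameter selection, not the paper's variational bound for HMM regression trees; the names `alpha`, `beta`, and the data are assumptions for the example.

```python
import numpy as np

def log_evidence(X, y, alpha, beta):
    """Log marginal likelihood of Bayesian linear regression with
    prior w ~ N(0, alpha^-1 I) and known noise precision beta."""
    N, M = X.shape
    A = alpha * np.eye(M) + beta * X.T @ X        # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)        # posterior mean
    # Data-fit plus regularization term of the evidence
    E = beta * np.sum((y - X @ m) ** 2) + alpha * m @ m
    _, logdet = np.linalg.slogdet(A)
    return 0.5 * (M * np.log(alpha) + N * np.log(beta)
                  - E - logdet - N * np.log(2 * np.pi))

# Synthetic regression data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

# Pick the prior precision that maximizes the evidence,
# analogous to optimizing hyper-parameters via the lower bound.
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best_alpha = max(grid, key=lambda a: log_evidence(X, y, a, beta=100.0))
print(best_alpha)
```

In the paper's setting, the same principle is applied with a variational lower bound (since the HMM marginal likelihood is intractable), and the quantities optimized include the regression tree structure itself.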

Original language: English
Pages (from-to): 341-358
Number of pages: 18
Journal: Journal of Signal Processing Systems
Volume: 74
Issue number: 3
DOI
Publication status: Published - Mar 2014
Externally published: Yes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Signal Processing
  • Information Systems
  • Modeling and Simulation
  • Hardware and Architecture

