Structural Bayesian linear regression for hidden Markov models

Shinji Watanabe*, Atsushi Nakamura, Biing Hwang Juang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

Linear regression for hidden Markov model (HMM) parameters is widely used for adaptive training in time-series pattern analysis, especially in speech processing. The regression parameters are usually shared among sets of Gaussians in the HMMs, with the Gaussian clusters organized in a tree. This paper realizes a fully Bayesian treatment of linear regression for HMMs that accounts for this regression tree structure, using variational techniques. We analytically derive the variational lower bound of the marginalized log-likelihood of the linear regression. By using this lower bound as an objective function, we can algorithmically optimize the tree structure and hyperparameters of the linear regression rather than heuristically tweaking them as tuning parameters. Experiments on large-vocabulary continuous speech recognition confirm the generalizability of the proposed approach, especially when the amount of adaptation data is limited.
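To make the tree-cut idea concrete, below is a minimal, self-contained sketch (not the paper's method) of marginal-likelihood-based selection of regression classes on a Gaussian cluster tree. It stands in for the paper's variational lower bound with the exact evidence of a conjugate Bayesian linear regression, and it fixes the hyperparameters alpha and beta rather than optimizing them as the paper does; the Node, gather, and best_cut names are hypothetical illustrations.

    import numpy as np

    def log_evidence(X, y, alpha=1.0, beta=1.0):
        """Log marginal likelihood of Bayesian linear regression,
        y ~ N(Xw, beta^-1 I), w ~ N(0, alpha^-1 I)
        (standard conjugate result, e.g. Bishop, PRML, Eq. 3.86)."""
        N, M = X.shape
        A = alpha * np.eye(M) + beta * X.T @ X      # posterior precision of w
        m = beta * np.linalg.solve(A, X.T @ y)      # posterior mean of w
        fit = 0.5 * beta * np.sum((y - X @ m) ** 2) + 0.5 * alpha * (m @ m)
        _, logdet_A = np.linalg.slogdet(A)
        return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
                - fit - 0.5 * logdet_A - 0.5 * N * np.log(2.0 * np.pi))

    class Node:
        """Regression-tree node. Leaves carry the adaptation data
        (X, y) of one Gaussian cluster; internal nodes group them."""
        def __init__(self, data=None, children=()):
            self.data = data              # (X, y) pair at a leaf, else None
            self.children = list(children)

    def gather(node):
        """All (X, y) pairs of the leaves under this node."""
        if not node.children:
            return [node.data]
        return [d for c in node.children for d in gather(c)]

    def best_cut(node, alpha=1.0, beta=1.0):
        """Greedy bottom-up tree-cut selection: tie one regression at
        this node, or split into the children, whichever scores higher."""
        pairs = gather(node)
        X = np.vstack([x for x, _ in pairs])
        y = np.concatenate([t for _, t in pairs])
        tied_score = log_evidence(X, y, alpha, beta)
        if not node.children:
            return tied_score, [node]
        results = [best_cut(c, alpha, beta) for c in node.children]
        split_score = sum(s for s, _ in results)
        if split_score > tied_score:
            return split_score, [n for _, cut in results for n in cut]
        return tied_score, [node]

    if __name__ == "__main__":
        # Toy example: four leaf clusters under two intermediate nodes.
        rng = np.random.default_rng(0)
        leaves = [Node(data=(rng.standard_normal((20, 3)),
                             rng.standard_normal(20))) for _ in range(4)]
        root = Node(children=[Node(children=leaves[:2]),
                              Node(children=leaves[2:])])
        score, cut = best_cut(root)
        print(f"bound = {score:.2f}, regression classes = {len(cut)}")

The point of scoring both options with the same marginal-likelihood-style objective, as in the paper's variational lower bound, is that tying and splitting become directly comparable, so the number of regression classes shrinks automatically when adaptation data are scarce and grows when they are plentiful.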

Original language: English
Pages (from-to): 341-358
Number of pages: 18
Journal: Journal of Signal Processing Systems
Volume: 74
Issue number: 3
Publication status: Published - 2014 Mar
Externally published: Yes

Keywords

  • Hidden Markov model
  • Linear regression
  • Structural prior
  • Variational Bayes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Signal Processing
  • Information Systems
  • Modelling and Simulation
  • Hardware and Architecture
