Bayesian extension of MUSIC for sound source localization and tracking

Takuma Otsuka*, Kazuhiro Nakadai, Tetsuya Ogata, Hiroshi G. Okuno

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

9 Citations (Scopus)

Abstract

This paper presents a Bayesian extension of the MUSIC-based sound source localization (SSL) and tracking method. SSL is important for distant speech enhancement and simultaneous speech separation to improve speech recognition, as well as for auditory scene analysis by mobile robots. One of the drawbacks of existing SSL methods is the necessity of careful parameter tuning, e.g., the sound source detection threshold, which depends on the reverberation time and the number of sources. Our contribution consists of (1) automatic parameter estimation in the variational Bayesian framework and (2) tracking of sound sources with reliability. Experimental results demonstrate that our method robustly tracks multiple sound sources in a reverberant environment with RT20 = 840 ms.

Original language: English
Pages (from-to): 3109-3112
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - 2011 Dec 1
Externally published: Yes
Event: 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011 - Florence, Italy
Duration: 2011 Aug 27 - 2011 Aug 31

Keywords

  • MUSIC algorithm
  • Particle filter
  • Simultaneous sound source localization
  • Variational Bayes

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation
