Unified auditory functions based on Bayesian topic model

Takuma Otsuka*, Katsuhiko Ishiguro, Hiroshi Sawada, Hiroshi G. Okuno

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

Existing auditory functions for robots, such as sound source localization and separation, have been implemented in a cascaded framework whose overall performance may be degraded by a failure in any of its subsystems. These approaches often require careful, environment-dependent tuning of each subsystem to achieve better performance. This paper presents a unified framework for sound source localization and separation in which the whole system is integrated as a Bayesian topic model. The method improves both localization and separation with a common configuration across various environments through iterative inference using Gibbs sampling. Experimental results from three environments with different reverberation times confirm that our method outperforms state-of-the-art sound source separation methods, especially in reverberant environments, and achieves localization performance comparable to that of an existing robot audition system.
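The abstract's inference scheme, collapsed Gibbs sampling for a topic model, can be illustrated with a minimal generic sketch. This is not the paper's audio model: it is a standard LDA-style sampler over word tokens, and all names (`gibbs_lda`, the hyperparameters `alpha` and `beta`) are illustrative assumptions. It shows only the general style of iterative inference the abstract refers to: each latent assignment is resampled in turn from its conditional posterior given all the others.

```python
import random


def gibbs_lda(docs, num_topics, vocab_size, alpha=0.1, beta=0.1,
              iters=200, seed=0):
    """Collapsed Gibbs sampling for a simple LDA-style topic model.

    docs: list of documents, each a list of word ids in [0, vocab_size).
    Returns the final topic assignment for every word position.
    """
    rng = random.Random(seed)
    # Count tables: document-topic, topic-word, and per-topic totals.
    n_dk = [[0] * num_topics for _ in docs]
    n_kw = [[0] * vocab_size for _ in range(num_topics)]
    n_k = [0] * num_topics

    # Random initialization of the latent topic assignments.
    z = []
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(num_topics)
            zd.append(k)
            n_dk[d][k] += 1
            n_kw[k][w] += 1
            n_k[k] += 1
        z.append(zd)

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment from the counts.
                n_dk[d][k] -= 1
                n_kw[k][w] -= 1
                n_k[k] -= 1
                # Unnormalized conditional posterior over topics.
                weights = [(n_dk[d][t] + alpha) * (n_kw[t][w] + beta)
                           / (n_k[t] + vocab_size * beta)
                           for t in range(num_topics)]
                # Draw a new topic proportionally to the weights.
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r < 0:
                        break
                k = t
                z[d][i] = k
                n_dk[d][k] += 1
                n_kw[k][w] += 1
                n_k[k] += 1
    return z
```

In the paper's setting the latent variables would instead tie together source directions and separation masks, so that a single sampler refines localization and separation jointly rather than in a cascade.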

Original language: English
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Pages: 2370-2376
Number of pages: 7
DOIs
Publication status: Published - 2012
Externally published: Yes
Event: 25th IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012 - Vilamoura, Algarve
Duration: 2012 Oct 7 - 2012 Oct 12

Other

Other: 25th IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012
City: Vilamoura, Algarve
Period: 12/10/7 - 12/10/12

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

