Epipolar geometry based sound localization and extraction for humanoid audition

Kazuhiro Nakadai, Hiroshi G. Okuno*, Hiroaki Kitano

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

43 Citations (Scopus)

Abstract

Sound localization for a robot or an embedded system is usually performed using the interaural phase difference (IPD) and the interaural intensity difference (IID). These values are conventionally calculated with a head-related transfer function (HRTF). However, an HRTF depends on the shape of the head and also changes as the environment changes, so sound localization without an HRTF is needed for real-world applications. In this paper, we present a new sound localization method based on auditory epipolar geometry with motion control. Auditory epipolar geometry extends epipolar geometry in stereo vision to audition, which lets the auditory and visual systems share the sound-source direction. The key idea is to exploit the additional inputs obtained by motor control to compensate for the distortion of the IPD and IID caused by room reverberation and the robot's own body. The proposed system can localize and extract two simultaneous sound sources in a real-world room.
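As a rough illustration of HRTF-free localization from the IPD, the Python sketch below estimates a source azimuth for a two-microphone pair under a simple free-field assumption, IPD(f) = 2*pi*f*(d/c)*sin(theta). This is only a minimal stand-in for the paper's method: the function name, the baseline d, and the speed of sound c are illustrative assumptions, and the actual system applies the epipolar constraint to the robot's head geometry and integrates IID and motor control.

import numpy as np

def estimate_azimuth(left, right, fs, d=0.18, c=343.0):
    # Estimate a source azimuth (radians, 0 = straight ahead) from the
    # interaural phase difference of two microphone signals, assuming
    # the free-field model IPD(f) = 2*pi*f*(d/c)*sin(theta). Only the
    # microphone baseline d is needed, not an HRTF.
    n = len(left)
    win = np.hanning(n)
    spec_l = np.fft.rfft(left * win)
    spec_r = np.fft.rfft(right * win)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    # Keep bins below the spatial-aliasing limit c/(2d), where the
    # wrapped phase difference is still unambiguous.
    band = (freqs > 100.0) & (freqs < c / (2.0 * d))
    cross = spec_l[band] * np.conj(spec_r[band])

    # Invert the model per frequency bin and average the per-bin
    # estimates, weighting by cross-power so noisy bins count less.
    sin_theta = np.clip(np.angle(cross) * c / (2.0 * np.pi * freqs[band] * d),
                        -1.0, 1.0)
    weights = np.abs(cross)
    return float(np.sum(weights * np.arcsin(sin_theta)) / np.sum(weights))

# Example: a 400 Hz tone arriving from roughly 30 degrees.
fs, f0, theta = 16000, 400.0, np.deg2rad(30.0)
t = np.arange(2048) / fs
delay = 0.18 * np.sin(theta) / 343.0            # interaural time delay
left = np.sin(2.0 * np.pi * f0 * t)
right = np.sin(2.0 * np.pi * f0 * (t - delay))
print(np.rad2deg(estimate_azimuth(left, right, fs)))   # approximately 30

With the assumed 18 cm baseline, the IPD becomes ambiguous above roughly 950 Hz, which is one motivation for combining it with the IID and with active motion, as the abstract describes.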

Original language: English
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Pages: 1395-1401
Number of pages: 7
Volume: 3
Publication status: Published - 2001
Externally published: Yes
Event: 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems - Maui, HI
Duration: 2001 Oct 29 - 2001 Nov 3

Keywords

  • Active audition
  • Humanoid
  • Localization
  • Sensor fusion

ASJC Scopus subject areas

  • Control and Systems Engineering
