Abstract
Robots, in particular mobile robots, should listen to and recognize speech with their own ears in the real world to attain smooth communication with people. This paper presents an active direction-pass filter (ADPF) that separates sounds originating from a specified direction by using a pair of microphones. Its application to front-end processing for speech recognition is also reported. Since the separation performance of the ADPF depends on the accuracy of sound source localization, various localization cues, including the interaural phase difference (IPD) and interaural intensity difference (IID) of each sub-band, as well as other visual and auditory processing, are integrated hierarchically. The resulting accuracy of auditory localization varies with the relative position of the sound source: the resolution in front of the robot is much higher than at the periphery, a property analogous to the visual fovea (the high-resolution center of the human retina). To make the best use of this property, the ADPF turns the robot's head toward the sound source by motor control. To recognize the sound streams separated by the ADPF, a Hidden Markov Model (HMM) based automatic speech recognizer is built with multiple acoustic models trained on the output of the ADPF under different conditions. A preliminary dialog system is thus implemented on an upper-torso humanoid. The experimental results show that it works well even when two speakers speak simultaneously.
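The core sub-band direction-pass idea described in the abstract can be sketched as follows: each frequency bin is kept only if its measured IPD matches the IPD expected for the target direction's arrival delay. This is a minimal single-frame illustration, not the paper's implementation; the FFT size, tolerance, and two-tone test signal are assumed values.

```python
import numpy as np

def direction_pass_filter(left, right, fs, target_delay, nfft=512, tol=0.3):
    """Sub-band direction-pass filter sketch: keep only frequency bins
    whose interaural phase difference (IPD) matches the target delay."""
    win = np.hanning(nfft)
    L = np.fft.rfft(left[:nfft] * win)
    R = np.fft.rfft(right[:nfft] * win)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    # measured IPD per sub-band, and the IPD expected for the target delay
    ipd = np.angle(L * np.conj(R))
    expected = 2 * np.pi * freqs * target_delay
    # wrap the difference to [-pi, pi] and build a binary pass mask
    diff = np.angle(np.exp(1j * (ipd - expected)))
    mask = np.abs(diff) < tol
    # reconstruct only the bands attributed to the target direction
    return np.fft.irfft(L * mask, nfft)

# two-source demo: a 437.5 Hz tone from the front (zero interaural delay)
# plus a 1000 Hz tone that reaches the right microphone 5 samples later
fs, nfft = 16000, 512
t = np.arange(nfft) / fs
d = 5 / fs
left = np.sin(2 * np.pi * 437.5 * t) + np.sin(2 * np.pi * 1000 * t)
right = np.sin(2 * np.pi * 437.5 * t) + np.sin(2 * np.pi * 1000 * (t - d))

out = direction_pass_filter(left, right, fs, target_delay=0.0)
spectrum = np.abs(np.fft.rfft(out))  # bin 14 = 437.5 Hz, bin 32 = 1000 Hz
```

Passing `target_delay=0.0` selects the frontal direction, so the filter retains the 437.5 Hz tone and suppresses the off-axis 1000 Hz tone. In the actual system, the target delay would be derived from the localized direction, and the filtering would run frame by frame over a full short-time spectrogram.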
Original language | English |
---|---|
Host publication title | IEEE International Conference on Intelligent Robots and Systems |
Pages | 1320-1325 |
Number of pages | 6 |
Volume | 2 |
Publication status | Published - 2002 |
Externally published | Yes |
Event | 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems - Lausanne, Duration: 30 Sep 2002 → 4 Oct 2002 |
Other
Other | 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems |
---|---|
City | Lausanne |
Period | 02/9/30 → 02/10/4 |
ASJC Scopus subject areas
- Control and Systems Engineering