TY - GEN
T1 - Missing-feature based speech recognition for two simultaneous speech signals separated by ICA with a pair of humanoid ears
AU - Takeda, Ryu
AU - Yamamoto, Shun'ichi
AU - Komatani, Kazunori
AU - Ogata, Tetsuya
AU - Okuno, Hiroshi G.
PY - 2006/12/1
Y1 - 2006/12/1
N2 - Robot audition is a critical technology for making robots symbiotic with people. Since we hear a mixture of sounds in our daily lives, sound source localization, sound source separation, and recognition of separated sounds are three essential capabilities. Sound source localization for robots has recently been well studied, while the other capabilities still need extensive study. This paper reports a robot audition system with a pair of omni-directional microphones embedded in a humanoid to recognize two simultaneous talkers. It first separates the sound sources by Independent Component Analysis (ICA) with a single-input multiple-output (SIMO) model. Then, the spectral distortion of the separated sounds is estimated to identify reliable and unreliable components of the spectrogram. This estimation generates missing-feature masks as spectrographic masks. These masks are then used to avoid the influence of spectral distortion in automatic speech recognition based on the missing-feature method. The novelty of our system lies in estimating spectral distortion in the time-frequency domain in terms of feature vectors. In addition, we point out that voice activity detection (VAD) is effective in overcoming ICA's weakness against a changing number of talkers. The resulting system outperformed the baseline robot audition system by 15%.
AB - Robot audition is a critical technology for making robots symbiotic with people. Since we hear a mixture of sounds in our daily lives, sound source localization, sound source separation, and recognition of separated sounds are three essential capabilities. Sound source localization for robots has recently been well studied, while the other capabilities still need extensive study. This paper reports a robot audition system with a pair of omni-directional microphones embedded in a humanoid to recognize two simultaneous talkers. It first separates the sound sources by Independent Component Analysis (ICA) with a single-input multiple-output (SIMO) model. Then, the spectral distortion of the separated sounds is estimated to identify reliable and unreliable components of the spectrogram. This estimation generates missing-feature masks as spectrographic masks. These masks are then used to avoid the influence of spectral distortion in automatic speech recognition based on the missing-feature method. The novelty of our system lies in estimating spectral distortion in the time-frequency domain in terms of feature vectors. In addition, we point out that voice activity detection (VAD) is effective in overcoming ICA's weakness against a changing number of talkers. The resulting system outperformed the baseline robot audition system by 15%.
KW - Automatic speech recognition
KW - ICA
KW - Missing-feature methods
KW - Multiple speakers
KW - Robot audition
UR - http://www.scopus.com/inward/record.url?scp=34250689497&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34250689497&partnerID=8YFLogxK
U2 - 10.1109/IROS.2006.281741
DO - 10.1109/IROS.2006.281741
M3 - Conference contribution
AN - SCOPUS:34250689497
SN - 142440259X
SN - 9781424402595
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 878
EP - 885
BT - 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2006
T2 - 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2006
Y2 - 9 October 2006 through 15 October 2006
ER -