Abstract
In this paper, we propose and develop a real-time audio-visual automatic continuous speech recognition system. The system uses live speech signals and facial images collected from a microphone and a camera. Optical-flow-based features are used as the visual features, and voice activity detection (VAD) and lip tracking are employed to improve recognition accuracy. Experiments are conducted on Japanese connected-digit speech contaminated with white noise, music, television news, and car engine noise. The results show that recognition accuracy is insufficient when the user is listening to news or riding in a car with the windows open, whereas accuracy remains high in a place with light music or in a running car with the windows closed.
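The abstract does not detail how the optical-flow-based visual features are computed. The sketch below illustrates one common way to obtain such features, assuming dense Farneback flow over a lip region; the ROI coordinates, the mean/variance feature summary, and the OpenCV-based pipeline are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch (not the paper's implementation): dense optical flow over a
# lip region, summarized into a small per-frame-pair visual feature vector.
import cv2
import numpy as np

def lip_flow_features(prev_frame, next_frame, roi):
    """Mean and variance of horizontal/vertical flow inside a lip ROI.

    roi: (x, y, w, h) -- hypothetical lip bounding box, e.g. from a lip tracker.
    """
    x, y, w, h = roi
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)[y:y+h, x:x+w]
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)[y:y+h, x:x+w]
    # Dense Farneback flow: one (dx, dy) vector per pixel in the ROI
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    fx, fy = flow[..., 0], flow[..., 1]
    # 4-dimensional visual feature for this frame pair
    return np.array([fx.mean(), fy.mean(), fx.var(), fy.var()])

if __name__ == "__main__":
    # Synthetic frames stand in for live camera input
    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
    f1 = np.roll(f0, 2, axis=0)  # fake vertical motion between frames
    print(lip_flow_features(f0, f1, roi=(130, 140, 60, 40)))
```

In a live system, such per-frame feature vectors would be concatenated or appended to the acoustic features before decoding; the exact fusion scheme used in the paper is not stated in the abstract.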
| Original language | English |
|---|---|
| Publication status | Published - 2010 |
| Externally published | Yes |
| Event | 2010 International Conference on Auditory-Visual Speech Processing, AVSP 2010 - Hakone, Japan. Duration: 30 Sep 2010 → 3 Oct 2010 |
Conference

| Conference | 2010 International Conference on Auditory-Visual Speech Processing, AVSP 2010 |
|---|---|
| Country/Territory | Japan |
| City | Hakone |
| Period | 30 Sep 2010 → 3 Oct 2010 |
ASJC Scopus subject areas
- Language and Linguistics
- Speech and Hearing
- Otorhinolaryngology