Abstract
In this paper, we propose a multi-modal voice activity detection (VAD) system that uses both audio and visual information. Audio-only VAD systems are typically not robust to acoustic noise. Incorporating visual information, for example features extracted from mouth images, can improve robustness, since the visual information is unaffected by acoustic noise. In multi-modal speech signal processing, there are two methods for fusing the audio and visual information: concatenating the audio and visual features (feature fusion), and employing separate audio-only and visual-only classifiers and then fusing their unimodal decisions (decision fusion). We investigate the effectiveness of these methods and also compare model-based and model-free approaches to VAD. Experimental results show that feature fusion is generally more effective, and that decision fusion performs better with model-free methods.
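The abstract contrasts the two fusion strategies without spelling them out; the following minimal sketch illustrates the difference. The feature dimensions, the logistic-regression classifiers, and the equal-weight averaging of posteriors are illustrative assumptions, not the paper's actual models or fusion rule.

```python
# Sketch: feature fusion vs. decision fusion for audio-visual VAD.
# All data here is synthetic; dimensions and classifiers are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_frames = 200
audio_feats = rng.normal(size=(n_frames, 12))   # e.g. per-frame acoustic features (assumed)
visual_feats = rng.normal(size=(n_frames, 6))   # e.g. mouth-region features (assumed)
labels = rng.integers(0, 2, size=n_frames)      # 1 = speech, 0 = non-speech

# Feature fusion: concatenate the modalities, train a single classifier.
fused = np.hstack([audio_feats, visual_feats])
fusion_clf = LogisticRegression().fit(fused, labels)
feature_fusion_decision = fusion_clf.predict(fused)

# Decision fusion: train one classifier per modality, then combine
# their posterior probabilities (equal weights assumed here).
audio_clf = LogisticRegression().fit(audio_feats, labels)
visual_clf = LogisticRegression().fit(visual_feats, labels)
p_audio = audio_clf.predict_proba(audio_feats)[:, 1]
p_visual = visual_clf.predict_proba(visual_feats)[:, 1]
decision_fusion_decision = (0.5 * p_audio + 0.5 * p_visual) > 0.5
```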
| Original language | English |
| --- | --- |
| Pages | 151-154 |
| Number of pages | 4 |
| Publication status | Published - 2009 |
| Externally published | Yes |
| Event | 2009 International Conference on Auditory-Visual Speech Processing, AVSP 2009 - Norwich, United Kingdom |
| Duration | 2009 Sept 10 → 2009 Sept 13 |
Conference
| Conference | 2009 International Conference on Auditory-Visual Speech Processing, AVSP 2009 |
| --- | --- |
| Country/Territory | United Kingdom |
| City | Norwich |
| Period | 2009 Sept 10 → 2009 Sept 13 |
Keywords
- AVVAD
- multi-modal
- voice activity detection
ASJC Scopus subject areas
- Language and Linguistics
- Speech and Hearing
- Otorhinolaryngology