Abstract
Lip reading technologies play an important role not only in image pattern recognition, e.g. computer vision, but also in audio-visual pattern recognition, e.g. bimodal speech recognition. However, lip reading accuracy remains significantly lower than that of speech recognition. A further problem is the performance degradation that occurs in real environments. To improve performance, in this paper we employ two adaptation schemes: speaker adaptation and environmental adaptation. Speaker adaptation is applied to the recognition models to prevent the degradation caused by differences between speakers, while environmental adaptation is conducted to deal with environmental differences. We tested these adaptation schemes using CENSREC-2-AV, a real-world audio-visual corpus we built that contains real-world data (speech signals and lip images) recorded in a driving car, in which subjects uttered Japanese connected digits. Experimental results show that lip reading performance was largely improved by the speaker adaptation and further recovered by the environmental adaptation.
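The abstract does not specify the adaptation algorithm, but model adaptation for HMM-based recognizers is commonly realized as an affine transform of the Gaussian means (MLLR-style), estimated from a small amount of adaptation data. The following is a minimal sketch of that general idea, not the paper's actual method; all function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: MLLR-style mean adaptation. Each Gaussian mean mu is
# mapped to A @ mu + b; the global transform W = [A b] is estimated by least
# squares from pairs of (speaker-independent mean, adaptation-data mean).
# This is NOT the paper's confirmed algorithm, only a common baseline.

def estimate_mean_transform(source_means, target_means):
    """Fit W = [A b] so that W @ [mu; 1] approximates the target means."""
    ext = np.hstack([source_means, np.ones((len(source_means), 1))])  # (N, D+1)
    W, *_ = np.linalg.lstsq(ext, target_means, rcond=None)            # (D+1, D)
    return W.T                                                        # (D, D+1)

def adapt_mean(W, mean):
    """Apply the affine transform to one mean vector."""
    return W @ np.append(mean, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D, N = 3, 20
    src = rng.normal(size=(N, D))                 # speaker-independent means
    A_true = np.eye(D) * 1.1                      # synthetic "true" transform
    b_true = np.array([0.5, -0.2, 0.1])
    tgt = src @ A_true.T + b_true                 # adaptation-data means
    W = estimate_mean_transform(src, tgt)
    err = np.linalg.norm(adapt_mean(W, src[0]) - tgt[0])
    print(round(err, 6))                          # → 0.0 (exact affine fit)
```

The same least-squares machinery would apply to environmental adaptation, with the transform estimated from in-car data instead of per-speaker data.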
Original language | English |
---|---|
Pages | 346-350 |
Number of pages | 5 |
DOIs | |
Publication status | Published - 2013 |
Externally published | Yes |
Event | 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013 - Naha, Okinawa, Japan |
Duration | 2013 Nov 5 → 2013 Nov 8 |
Conference
Conference | 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013 |
---|---|
Country/Territory | Japan |
City | Naha, Okinawa |
Period | 13/11/5 → 13/11/8 |
Keywords
- Environmental adaptation
- Lip reading
- Real environment
- Speaker adaptation
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition