TY - GEN
T1 - 3D human head geometry estimation from a speech
AU - Maejima, Akinobu
AU - Morishima, Shigeo
PY - 2012/9/6
Y1 - 2012/9/6
N2 - We can visualize an acquaintance's appearance just by hearing their voice, provided we have met them in the past few years. Thus, some relationship appears to exist between voice and appearance. If 3D head geometry could be estimated from a voice, applications such as avatar generation and character modeling for video games would become possible. Although many researchers have reported on the relationship between the acoustic features of a voice and the corresponding dynamic visual features, including lip, tongue, and jaw movements or vocal articulation during speech, there have been few reports on the relationship between acoustic features and static 3D head geometry. In this paper, we focus on estimating 3D head geometry from a voice. Because acoustic features vary depending on the speech context and its intonation, we restrict the context to the five Japanese vowels. Under this assumption, we estimate 3D head geometry using a Feedforward Neural Network (FNN) trained on correspondences between individual acoustic features extracted from a Japanese vowel and 3D head geometry generated from a 3D range scan. The performance of our method is demonstrated in both closed and open tests. As a result, we found that 3D head geometry acoustically similar to that of the input voice could be estimated under this limited condition.
AB - We can visualize an acquaintance's appearance just by hearing their voice, provided we have met them in the past few years. Thus, some relationship appears to exist between voice and appearance. If 3D head geometry could be estimated from a voice, applications such as avatar generation and character modeling for video games would become possible. Although many researchers have reported on the relationship between the acoustic features of a voice and the corresponding dynamic visual features, including lip, tongue, and jaw movements or vocal articulation during speech, there have been few reports on the relationship between acoustic features and static 3D head geometry. In this paper, we focus on estimating 3D head geometry from a voice. Because acoustic features vary depending on the speech context and its intonation, we restrict the context to the five Japanese vowels. Under this assumption, we estimate 3D head geometry using a Feedforward Neural Network (FNN) trained on correspondences between individual acoustic features extracted from a Japanese vowel and 3D head geometry generated from a 3D range scan. The performance of our method is demonstrated in both closed and open tests. As a result, we found that 3D head geometry acoustically similar to that of the input voice could be estimated under this limited condition.
UR - http://www.scopus.com/inward/record.url?scp=84865628033&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84865628033&partnerID=8YFLogxK
U2 - 10.1145/2342896.2342997
DO - 10.1145/2342896.2342997
M3 - Conference contribution
AN - SCOPUS:84865628033
SN - 9781450316828
T3 - ACM SIGGRAPH 2012 Posters, SIGGRAPH'12
BT - ACM SIGGRAPH 2012 Posters, SIGGRAPH'12
T2 - ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH'12
Y2 - 5 August 2012 through 9 August 2012
ER -