TY - GEN
T1 - Segmenting acoustic signal with articulatory movement using recurrent neural network for phoneme acquisition
AU - Kanda, Hisashi
AU - Ogata, Tetsuya
AU - Komatani, Kazunori
AU - Okuno, Hiroshi G.
PY - 2008/12/1
AB - This paper proposes a computational model of phoneme acquisition by infants. Human infants perceive speech sounds not as discrete phoneme sequences but as continuous acoustic signals. One of the critical problems in phoneme acquisition is therefore how to segment these continuous speech sounds. The key idea for solving this problem is that articulatory mechanisms such as the vocal tract help human beings perceive speech sound units corresponding to phonemes. That is, the ability to distinguish phonemes is learned by recognizing unstable points in the dynamics of continuous sound coupled with articulatory movement. We have developed a vocal imitation system that embodies the relationship between articulatory movements and the sounds they produce. To segment acoustic signals using articulatory movement, we apply a segmentation method based on a Recurrent Neural Network with Parametric Bias (RNNPB) to our system. This method determines multiple segmentation boundaries in a temporal sequence using the prediction error of the RNNPB model, and the PB values it obtains can be encoded as a kind of phoneme. Our system was implemented with a physical vocal tract model, the Maeda model. Experimental results demonstrated that our system can self-organize the same phonemes across different continuous sounds. This suggests that our model reflects the process of phoneme acquisition.
UR - http://www.scopus.com/inward/record.url?scp=69549131076&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=69549131076&partnerID=8YFLogxK
DO - 10.1109/IROS.2008.4651060
M3 - Conference contribution
AN - SCOPUS:69549131076
SN - 9781424420582
T3 - 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS
SP - 1712
EP - 1717
BT - 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS
T2 - 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS
Y2 - 22 September 2008 through 26 September 2008
ER -