Abstract
A method has been developed for intermodal mapping that generates robot motion from various sounds and generates sounds from motions. The procedure consists of two phases. In the learning phase, the robot observes events together with their associated sounds and memorizes those sounds along with the motions of the sound source. In the interacting phase, the robot receives limited sensory information from a single modality as input, associates it with a different modality, and expresses it. The recurrent neural network with parametric bias (RNNPB) model is applied, which takes the current state vector as input and outputs the next state vector. The RNNPB model can self-organize the values that encode the input dynamics into special parametric-bias nodes to reproduce the multimodal sensory flow.
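As a rough illustration of the prediction step described above, the following is a minimal sketch of an RNNPB-style forward pass, assuming a simple Elman-type recurrent layer. The dimensions, weight initialization, and names (`state_dim`, `pb_dim`, `rnnpb_step`, etc.) are illustrative assumptions and are not taken from the article.

```python
import numpy as np

# Illustrative sizes only; the article does not specify the network dimensions.
rng = np.random.default_rng(0)
state_dim, hidden_dim, pb_dim = 8, 16, 2

W_in = rng.standard_normal((hidden_dim, state_dim)) * 0.1    # current state -> hidden
W_rec = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1  # hidden -> hidden (recurrence)
W_pb = rng.standard_normal((hidden_dim, pb_dim)) * 0.1       # parametric bias -> hidden
W_out = rng.standard_normal((state_dim, hidden_dim)) * 0.1   # hidden -> next state

def rnnpb_step(state, hidden, pb):
    """Predict the next multimodal state vector from the current one.

    The parametric-bias vector `pb` is held fixed within a sequence and
    modulates the hidden dynamics, so different PB values reproduce
    different learned sensory-motor flows.
    """
    hidden = np.tanh(W_in @ state + W_rec @ hidden + W_pb @ pb)
    next_state = np.tanh(W_out @ hidden)
    return next_state, hidden

# Roll out a short sequence from an initial state with a given PB vector.
state = np.zeros(state_dim)
hidden = np.zeros(hidden_dim)
pb = np.array([0.3, -0.5])  # hypothetical PB value encoding one learned pattern
for _ in range(10):
    state, hidden = rnnpb_step(state, hidden, pb)
```

In the interaction phase the paper describes, the same prediction machinery would be driven by partial input from one modality while the PB values are adjusted to match it; the sketch above shows only the forward generation step.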
Original language | English |
---|---|
Article number | 4475863 |
Pages (from-to) | 76-78 |
Number of pages | 3 |
Journal | IEEE Intelligent Systems |
Volume | 23 |
Issue number | 2 |
DOIs | |
Publication status | Published - 2008 Mar |
Externally published | Yes |
ASJC Scopus subject areas
- Computer Networks and Communications
- Artificial Intelligence