Abstract
To achieve visually guided object manipulation tasks via learning by example, this neuro-robotics study considers the integration of two essential mechanisms, visual attention and arm/hand movement, and their adaptive coordination. The study proposes a new dynamic neural network model in which visual attention and motor behavior are associated in task-specific ways through learning, with a self-organizing functional hierarchy required for the cognitive tasks. Top-down visual attention provides a goal-directed shift sequence in a visual scan path, which can guide the generation of a motor plan for hand movement during action via reinforcement and inhibition learning. The proposed model automatically generates the corresponding goal-directed actions with regard to the current sensory states, including visual stimuli and body postures. The experiments show that developmental learning, from basic actions to combinational ones, achieves a degree of generalization by which some novel behaviors can be generated successfully without prior learning.
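The abstract describes a coupling in which an attention shift sequence over a visual scan path guides motor plan generation, with associations shaped by reinforcement and inhibition. The following is a minimal, hypothetical sketch of that idea only; the class, its weight-matrix representation, and the update rule are illustrative assumptions and do not reproduce the paper's actual dynamic neural network model.

```python
import numpy as np

class AttentionMotorCoupler:
    """Hypothetical sketch: associate attended visual locations with motor
    primitives via a weight matrix, strengthened by reinforcement on success
    and weakened by inhibition on failure."""

    def __init__(self, n_locations, n_primitives, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Association weights: attended location -> motor primitive.
        self.w = rng.uniform(0.0, 0.01, size=(n_locations, n_primitives))
        self.lr = lr

    def plan(self, scan_path):
        """Map a goal-directed attention shift sequence to a motor plan:
        for each attended location, pick the most strongly associated primitive."""
        return [int(np.argmax(self.w[loc])) for loc in scan_path]

    def update(self, location, primitive, success):
        """Reinforce the association on success, inhibit it on failure."""
        delta = self.lr if success else -self.lr
        self.w[location, primitive] = np.clip(
            self.w[location, primitive] + delta, 0.0, 1.0)

# Toy usage: repeatedly reinforce primitive 2 and inhibit primitive 1
# for attended location 0, then read out the resulting motor plan.
coupler = AttentionMotorCoupler(n_locations=3, n_primitives=4)
for _ in range(20):
    coupler.update(location=0, primitive=2, success=True)   # reinforce
    coupler.update(location=0, primitive=1, success=False)  # inhibit
plan = coupler.plan(scan_path=[0])
```

After training, the scan path `[0]` maps to the reinforced primitive. The weight-matrix design stands in for the paper's learned functional hierarchy purely for illustration.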
Original language | English |
---|---|
Title of host publication | 2010 IEEE 9th International Conference on Development and Learning, ICDL-2010 - Conference Program |
Pages | 165-170 |
Number of pages | 6 |
DOIs | |
Publication status | Published - 2010 |
Externally published | Yes |
Event | 2010 IEEE 9th International Conference on Development and Learning, ICDL-2010 - Ann Arbor, MI |
Duration | 2010 Aug 18 → 2010 Aug 21 |
Other
Other | 2010 IEEE 9th International Conference on Development and Learning, ICDL-2010 |
---|---|
City | Ann Arbor, MI |
Period | 2010 Aug 18 → 2010 Aug 21 |
Keywords
- Action generator
- Object manipulation task
- Shift sequence in visual scan path
ASJC Scopus subject areas
- Human-Computer Interaction
- Software
- Education