Abstract
In this work we describe how an existing neural model for learning Cell Assemblies (CAs) across multiple neuroanatomical brain areas has been integrated with a humanoid robot simulation to explore the learning of associations between visual and motor modalities. The results show that robust CAs are learned, enabling pattern completion to select the correct motor response when only visual input is presented. We also show that, with some parameter tuning and the pre-processing of more realistic patterns taken from images of real objects and robot poses, the network can act as a controller for the robot in visuo-motor association tasks. This provides the basis for further neurorobotic experiments on grounded language learning.
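The abstract does not specify the CA model's learning rule, so as an illustration of the pattern-completion mechanism it describes, the following is a minimal sketch of a hetero-associative, Willshaw-style binary memory linking a "visual" area to a "motor" area via Hebbian learning. All names, area sizes, and the clipped-Hebbian rule are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of cross-area pattern completion (visual -> motor).
# Willshaw-style binary associative memory; not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

N_VIS, N_MOT = 100, 100   # assumed sizes of the visual and motor areas
K = 10                    # active units per sparse binary pattern

def sparse_pattern(n, k):
    """Random binary pattern with k active units out of n."""
    p = np.zeros(n)
    p[rng.choice(n, size=k, replace=False)] = 1.0
    return p

# Paired visual/motor patterns to be associated (one pair per "object").
pairs = [(sparse_pattern(N_VIS, K), sparse_pattern(N_MOT, K))
         for _ in range(5)]

# Hebbian learning: sum outer products of co-active units, clipped to
# binary weights (the Willshaw rule).
W = np.zeros((N_MOT, N_VIS))
for vis, mot in pairs:
    W = np.clip(W + np.outer(mot, vis), 0.0, 1.0)

def complete(vis, k=K):
    """Pattern completion: recover the motor pattern from visual input
    alone by activating the k most strongly driven motor units."""
    drive = W @ vis
    mot = np.zeros(N_MOT)
    mot[np.argsort(drive)[-k:]] = 1.0
    return mot

# Present only the visual half of each pair and measure recall.
for i, (vis, mot) in enumerate(pairs):
    overlap = int(complete(vis) @ mot)
    print(f"pair {i}: {overlap}/{K} motor units recovered")
```

At this low memory load the crosstalk between stored pairs is small, so each motor pattern is recovered essentially perfectly from its visual cue; this is the sense in which pattern completion can "select a motor response" from visual input alone.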
| Original language | English |
|---|---|
| Pages | 1-8 |
| Number of pages | 8 |
| Publication status | Published - 2014 |
Keywords
- Neurorobotics
- Cell Assemblies
- Visual-Motor Learning