Learning Visual-Motor Cell Assemblies for the iCub Robot using a Neuroanatomically Grounded Neural Network

S.V. Adams, T. Wennekers, A. Cangelosi, M. Garagnani, F. Pulvermueller

Research output: Contribution to conference › Paper › peer-review

Abstract

In this work we describe how an existing neural model for learning Cell Assemblies (CAs) across multiple neuroanatomical brain areas has been integrated with a humanoid robot simulation to explore the learning of associations between the visual and motor modalities. The results show that robust CAs are learned, enabling pattern completion to select the correct motor response when only visual input is presented. We also show that, with some parameter tuning and pre-processing of more realistic patterns taken from images of real objects and robot poses, the network can act as a controller for the robot in visuo-motor association tasks. This provides the basis for further neurorobotic experiments on grounded language learning.
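
The core mechanism the abstract refers to, learning a joint visuo-motor cell assembly and then completing it from a visual cue alone, can be illustrated with a minimal Hebbian sketch. This is not the paper's multi-area, neuroanatomically grounded model; layer sizes, sparsity, and the recall threshold below are all illustrative assumptions.

```python
# Illustrative sketch only: a one-area Hebbian associator, NOT the
# multi-area model used in the paper. All sizes and parameters here
# are hypothetical, chosen to demonstrate the pattern-completion idea.
import numpy as np

rng = np.random.default_rng(0)

N_VIS, N_MOT = 64, 32          # sizes of the "visual" and "motor" layers (assumed)
N = N_VIS + N_MOT              # one joint layer holding both modalities

def sparse_pattern(n, k):
    """Random binary pattern with k active units."""
    p = np.zeros(n)
    p[rng.choice(n, size=k, replace=False)] = 1.0
    return p

# A visuo-motor "cell assembly": concatenated visual and motor patterns.
vis = sparse_pattern(N_VIS, 8)
mot = sparse_pattern(N_MOT, 4)
assembly = np.concatenate([vis, mot])

# Hebbian learning: strengthen weights between co-active units.
W = np.outer(assembly, assembly)
np.fill_diagonal(W, 0.0)

# Pattern completion: present the visual part only and recover the motor
# part by thresholding the recurrent input (one synchronous update).
cue = np.concatenate([vis, np.zeros(N_MOT)])
recalled = (W @ cue > 0.5 * cue.sum()).astype(float)  # threshold is an assumption

print("motor pattern recovered:", np.array_equal(recalled[N_VIS:], mot))
```

With one stored assembly the visual cue drives every assembly unit above threshold, so the motor half is recovered exactly; the paper's network achieves the analogous effect with spiking dynamics distributed over several simulated cortical areas.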
Original language: English
Pages: 1-8
Number of pages: 8
Publication status: Published - 2014

Keywords

  • Neurorobotics
  • Cell Assemblies
  • Visual-Motor Learning
