Abstract
We propose a developmental method that enables a robot to identify the visual locations associated with its own body in a cluttered visual image, based on the concept of visuomotor predictors. A set of statistical predictors is trained by linear regression to predict the visual features at each visual location from proprioceptive input. By measuring each predictor's predictability with the R² statistic, the algorithm determines which visual locations correspond to the robot's body parts. Visual features are extracted using biologically plausible visual motion processing models. We demonstrate that while both orientation-selective and motion-selective visual features can be used for self-identification, motion-selective features are more robust to changes in appearance. © 2012 IEEE.
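The core idea in the abstract can be sketched in a few lines: fit one linear predictor per visual location mapping proprioception to that location's visual feature, score it with R², and flag high-R² locations as body. The following is a minimal illustrative sketch, not the authors' implementation; the synthetic data, the function name `fit_visuomotor_predictor`, and the 0.5 threshold are assumptions for illustration only.

```python
import numpy as np

def fit_visuomotor_predictor(proprio, visual):
    """Fit a linear map from proprioceptive input to the visual feature
    at one location; return the R^2 score on the training data."""
    # Augment with a bias column and solve ordinary least squares.
    X = np.hstack([proprio, np.ones((proprio.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, visual, rcond=None)
    pred = X @ w
    ss_res = np.sum((visual - pred) ** 2)
    ss_tot = np.sum((visual - visual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
joints = rng.normal(size=(200, 3))  # proprioceptive input, e.g. joint angles

# "Body" location: its visual feature is driven by the joints (plus noise),
# so it is predictable from proprioception.
body_feature = joints @ np.array([0.5, -1.2, 0.8]) + 0.05 * rng.normal(size=200)
# "Background" location: its feature is unrelated to the robot's own motion.
background_feature = rng.normal(size=200)

r2_body = fit_visuomotor_predictor(joints, body_feature)
r2_background = fit_visuomotor_predictor(joints, background_feature)

# High R^2 marks a location as belonging to the robot's body
# (threshold chosen arbitrarily for this toy example).
is_body = r2_body > 0.5
```

In this toy setup the body location yields an R² near 1 while the background location stays near 0, which is the separation the method exploits to segment the robot's body from visual clutter.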
Original language | English |
---|---|
Title of host publication | 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012 |
Publication status | Published - 2012 |
Event | 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012 - San Diego, CA. Duration: 1 Jul 2012 → … |
Conference
Conference | 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012 |
---|---|
City | San Diego, CA |
Period | 1/07/12 → … |