Development of robot self-identification based on visuomotor prediction

Tao Zhou, Piotr Dudek, Bertram E. Shi

    Research output: Chapter in Book/Conference proceeding › Conference contribution

    Abstract

    We propose a developmental method that enables a robot to identify the visual locations associated with its own body in a cluttered visual image, based on the concept of visuomotor predictors. A set of statistical predictors is trained by linear regression to predict the visual features at each visual location from proprioceptive input. By measuring each predictor's predictability with the R² statistic, the algorithm determines which visual locations correspond to the robot's body parts. Visual features are extracted using biologically plausible visual motion processing models. We demonstrate that while both orientation-selective and motion-selective visual features can be used for self-identification, motion-selective features are more robust to changes in appearance. © 2012 IEEE.
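
    As a rough illustration of the approach described in the abstract, the minimal sketch below (Python, using NumPy and scikit-learn) trains one linear predictor per visual location that maps proprioceptive input (joint angles) to that location's visual feature vector, then scores each predictor with R² and labels well-predicted locations as belonging to the robot's body. The data are synthetic, and the dimensionalities, the 0.5 threshold, and all variable names are hypothetical; the paper's actual visual features come from biologically plausible motion processing models that are not reproduced here.

    # Hypothetical sketch: per-location visuomotor predictors scored by R^2.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)

    T = 500            # number of time steps (training samples)
    n_joints = 4       # proprioceptive dimensionality (e.g. arm joint angles)
    n_locations = 64   # visual locations (e.g. an 8x8 grid over the image)
    feat_dim = 8       # visual feature dimensionality per location (hypothetical)

    # Synthetic proprioceptive input: joint angles over time.
    joints = rng.standard_normal((T, n_joints))

    # Synthetic visual features: the first 16 locations "contain" the robot's
    # arm, so their features depend linearly on the joint angles plus noise;
    # the remaining locations are unrelated background clutter.
    features = rng.standard_normal((T, n_locations, feat_dim))
    for loc in range(16):
        W = rng.standard_normal((n_joints, feat_dim))
        features[:, loc, :] = joints @ W + 0.1 * rng.standard_normal((T, feat_dim))

    # Train one linear predictor per visual location and score it with R^2.
    r2_per_location = np.empty(n_locations)
    for loc in range(n_locations):
        model = LinearRegression().fit(joints, features[:, loc, :])
        pred = model.predict(joints)
        r2_per_location[loc] = r2_score(features[:, loc, :], pred)

    # Locations whose features are well predicted from proprioception are
    # labeled as belonging to the robot's own body (threshold is arbitrary).
    self_mask = r2_per_location > 0.5
    print("locations identified as self:", np.flatnonzero(self_mask))

    In practice the R² scores would be computed on held-out data rather than on the training set, so that background clutter with spuriously correlated motion is not mistaken for part of the body.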
    Original language: English
    Title of host publication: 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012
    DOIs
    Publication status: Published - 2012
    Event: 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012 - San Diego, CA
    Duration: 1 Jul 2012 → …

    Conference

    Conference: 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012
    City: San Diego, CA
    Period: 1/07/12 → …
