Exploring Deep Models for Comprehension of Deictic Gesture-Word Combinations in Cognitive Robotics

Gabriella Pizzuto, Angelo Cangelosi

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

In the early stages of infant development, gestures and speech are integrated during language acquisition. Such a natural combination is therefore a desirable, yet challenging, goal for fluid human-robot interaction. To achieve this, we propose a multimodal deep learning architecture for the comprehension of complementary gesture-word combinations, implemented on an iCub humanoid robot. This enables human-assisted language learning, with interactions such as pointing at a cup and labelling it with a vocal utterance. We evaluate various depths of the Mask Region-based Convolutional Neural Network (Mask R-CNN, for object and wrist detection) and the Residual Network (ResNet, for gesture classification). Validation is carried out with two deictic gestures across ten real-world objects on frames recorded directly from the iCub’s cameras. Results further strengthen the potential of gesture-word combinations for robot language acquisition.
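The architecture described above pairs an instance-segmentation detector (Mask R-CNN) with an image classifier (ResNet). The sketch below illustrates one way such a pairing could be wired together using off-the-shelf torchvision models; the specific model depths, the two-class gesture head, the whole-frame classification, and the frame size are illustrative assumptions, not the authors' trained models or pipeline.

```python
import torch
from torchvision.models import resnet50
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Stand-in detector: Mask R-CNN with a ResNet-50 FPN backbone
# (the backbone depths actually evaluated in the paper may differ).
detector = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Stand-in gesture classifier: ResNet-50 with its final layer replaced by
# a two-way head for the two deictic gesture classes (an assumption here).
classifier = resnet50(weights="DEFAULT")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)
classifier.eval()


def comprehend(frame: torch.Tensor):
    """Run object/wrist detection and gesture classification on one (C, H, W) frame."""
    with torch.no_grad():
        # Mask R-CNN returns boxes, labels, scores and masks for each image.
        detections = detector([frame])[0]
        # Here the classifier sees the whole frame; cropping around the
        # detected wrist would be a natural refinement.
        gesture = classifier(frame.unsqueeze(0)).argmax(dim=1).item()
    return detections, gesture


# A random tensor standing in for a 640x480 frame from the iCub's cameras.
frame = torch.rand(3, 480, 640)
detections, gesture = comprehend(frame)
print(len(detections["boxes"]), "objects detected; gesture class:", gesture)
```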
Original language: English
Title of host publication: International Joint Conference on Neural Networks
DOIs
Publication status: Published - 2019

Keywords

  • cognitive developmental robotics
  • embodied language acquisition
