A Bi-directional Multiple Timescales LSTM Model for Grounding of Actions and Verbs

Alexandre Antunes, Alban Laflaquière, Tetsuya Ogata, Angelo Cangelosi

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper we present a neural architecture that learns a bi-directional mapping between actions and language. We implement a Multiple Timescale Long Short-Term Memory (MT-LSTM) network comprising seven layers with different timescale factors, which connects actions to language without explicitly learning an intermediate representation. Instead, the model self-organizes such representations at the level of a slow-varying latent layer that links the action branch and the language branch at the center. We train the model bi-directionally, learning to produce a sentence from a given action sequence and, simultaneously, to generate an action sequence from a given sentence. Furthermore, we show that this model preserves some of the generalization behaviour of Multiple Timescale Recurrent Neural Networks (MTRNN), generating sentences and actions that were not explicitly trained. We compare this model against several baseline models, confirming the importance of both the bi-directional training and the multiple timescales architecture. Finally, the network was evaluated on motor actions performed by an iCub robot and their corresponding letter-based descriptions. The results of these experiments are presented at the end of the paper.
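The abstract does not give implementation details, but a minimal sketch of one plausible reading of an MT-LSTM layer stack, assuming the timescale factor enters as an MTRNN-style leaky-integration constant on the LSTM states, might look like the following PyTorch code. The class name, layer sizes, and tau values are illustrative assumptions, not the paper's.

import torch
import torch.nn as nn

class MultipleTimescaleLSTM(nn.Module):
    """Sketch: a stack of LSTM cells whose states are leaky-integrated
    with per-layer time constants (tau), in the spirit of MTRNNs.
    Hyperparameters here are illustrative, not from the paper."""

    def __init__(self, input_size, hidden_sizes, taus):
        super().__init__()
        assert len(hidden_sizes) == len(taus)
        self.taus = taus
        sizes = [input_size] + list(hidden_sizes)
        self.cells = nn.ModuleList(
            nn.LSTMCell(sizes[i], sizes[i + 1]) for i in range(len(hidden_sizes))
        )

    def forward(self, x_seq):
        # x_seq: (seq_len, batch, input_size)
        batch = x_seq.size(1)
        states = [
            (torch.zeros(batch, c.hidden_size), torch.zeros(batch, c.hidden_size))
            for c in self.cells
        ]
        outputs = []
        for x in x_seq:
            inp = x
            new_states = []
            for cell, (h, c), tau in zip(self.cells, states, self.taus):
                h_new, c_new = cell(inp, (h, c))
                # Leaky integration: a larger tau makes the layer slower-varying.
                h = (1.0 - 1.0 / tau) * h + (1.0 / tau) * h_new
                c = (1.0 - 1.0 / tau) * c + (1.0 / tau) * c_new
                new_states.append((h, c))
                inp = h
            states = new_states
            outputs.append(states[-1][0])  # hidden state of the slowest layer
        return torch.stack(outputs), states

# Example: a three-layer stack with fast-to-slow timescales.
mt = MultipleTimescaleLSTM(input_size=10, hidden_sizes=[64, 32, 16], taus=[2, 8, 32])
y, _ = mt(torch.randn(20, 4, 10))

In this reading, the slowest (largest-tau) layer changes least from step to step, so a central slow layer of this kind could play the role of the shared latent representation that links the action and language branches in the abstract.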
Original language: English
Title of host publication: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems
Publication status: Accepted/In press - 20 Jun 2019
Event: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems - Macao, China
Duration: 4 Nov 2019 - 8 Nov 2019

Conference

Conference: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems
Abbreviated title: IROS 2019
Country/Territory: China
City: Macao
Period: 4/11/19 - 8/11/19
