Abstract
In this paper we present a neural architecture that learns a bi-directional mapping between actions and language. We implement a Multiple Timescale Long Short-Term Memory (MT-LSTM) network composed of 7 layers with different timescale factors, connecting actions to language without explicitly learning an intermediate representation. Instead, the model self-organizes such representations in a slow-varying latent layer that links the action branch and the language branch at the center. We train the model bi-directionally, learning to produce a sentence from a given action sequence input and, simultaneously, to generate an action sequence given a sentence as input. Furthermore, we show that this model preserves some of the generalization behaviour of Multiple Timescale Recurrent Neural Networks (MTRNN), generating sentences and actions that were not explicitly trained. We compare this model against several baseline models, confirming the importance of both the bi-directional training and the multiple-timescales architecture. Finally, the network was evaluated on motor actions performed by an iCub robot and their corresponding letter-based descriptions; the results of these experiments are presented at the end of the paper.
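The "multiple timescales" mechanism referenced in the abstract is commonly realized as a leaky-integrator update, where each layer blends its previous activation with new input according to a timescale factor τ: larger τ makes a layer change more slowly and retain context longer. The following is a minimal illustrative sketch of that update rule, not the paper's exact equations; the function name and values are hypothetical.

```python
def mt_update(u_prev, net_input, tau):
    """Leaky-integrator update used in multiple-timescale networks.

    A larger tau gives a slower-varying activation: the layer keeps more
    of its previous state and integrates new input more gradually.
    """
    return (1.0 - 1.0 / tau) * u_prev + (1.0 / tau) * net_input


# Compare a fast layer (tau=2) with a slow layer (tau=16)
# responding to a constant step input of 1.0 over five steps.
fast = slow = 0.0
for _ in range(5):
    fast = mt_update(fast, 1.0, tau=2.0)
    slow = mt_update(slow, 1.0, tau=16.0)
print(round(fast, 3), round(slow, 3))  # the fast layer approaches 1.0 much sooner
```

In an MTRNN-style stack, fast layers near the input/output capture rapid sensorimotor or character-level detail, while slow central layers (like the latent layer described above) integrate over whole sequences.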
| Original language | English |
|---|---|
| Title of host publication | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems |
| Publication status | Accepted/In press - 20 Jun 2019 |
| Event | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, Macao, China. Duration: 4 Nov 2019 → 8 Nov 2019 |
Conference
| Conference | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems |
|---|---|
| Abbreviated title | IROS 2019 |
| Country/Territory | China |
| City | Macao |
| Period | 4/11/19 → 8/11/19 |