Sample-efficient Deep Reinforcement Learning with Imaginary Rollouts for Human-Robot Interaction

Mohammad Thabet, Massimiliano Patacchiola, Angelo Cangelosi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep reinforcement learning has proven to be a great success in allowing agents to learn complex tasks. However, its application to actual robots can be prohibitively expensive. Furthermore, the unpredictability of human behavior in human-robot interaction tasks can hinder convergence to a good policy. In this paper, we present an architecture that allows agents to learn models of stochastic environments and use them to accelerate learning. We describe how an environment model can be learned online and used to generate synthetic transitions, as well as how an agent can leverage these synthetic data to accelerate learning. We validate our approach with an experiment in which a robotic arm must complete a task composed of a series of actions based on human gestures. Results show that our approach leads to significantly faster learning, requiring much less interaction with the environment. Furthermore, we demonstrate how learned models can be used by a robot to produce optimal plans in real-world applications.
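The abstract's core mechanism — learning an environment model online and using it to generate synthetic ("imaginary") transitions that supplement real experience — follows the same pattern as Dyna-style planning. The sketch below illustrates that pattern in its simplest tabular form; the function names, the toy chain environment, and all hyperparameters are illustrative assumptions and do not reproduce the paper's deep, stochastic architecture.

```python
import random

def dyna_q(env_step, states, actions, episodes=60, n_planning=10,
           alpha=0.5, gamma=0.95, epsilon=0.3, seed=1):
    """Tabular Q-learning augmented with imaginary rollouts from a learned model."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    model = {}  # learned model: (state, action) -> (reward, next_state)

    def greedy(s):
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = states[0]
        while s is not None:  # None marks a terminal transition
            a = rng.choice(actions) if rng.random() < epsilon else greedy(s)
            r, s2 = env_step(s, a)  # one *real* interaction with the environment
            target = r + (gamma * max(Q[(s2, b)] for b in actions) if s2 is not None else 0.0)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            model[(s, a)] = (r, s2)  # update the learned environment model

            # Planning: n_planning extra updates from *synthetic* transitions,
            # replayed from the model instead of the real environment.
            for _ in range(n_planning):
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                ptarget = pr + (gamma * max(Q[(ps2, b)] for b in actions) if ps2 is not None else 0.0)
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
    return Q

# Toy 3-state chain (an assumption for illustration): action 1 moves right,
# action 0 stays put; reaching the end of the chain yields reward 1.
def step(s, a):
    if a == 1:
        return (1.0, None) if s == 2 else (0.0, s + 1)
    return (0.0, s)

Q = dyna_q(step, states=[0, 1, 2], actions=[0, 1])
```

Each real transition here funds `n_planning` additional imaginary updates, which is why model-based agents of this kind need far fewer environment interactions than model-free Q-learning.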
Original language: English
Title of host publication: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)
Publication status: Accepted/In press - 20 Jun 2019
Event: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems - Macao, China
Duration: 4 Nov 2019 – 8 Nov 2019

Conference

Conference: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems
Abbreviated title: IROS 2019
Country/Territory: China
City: Macao
Period: 4/11/19 – 8/11/19
