Abstract
Learning fine-grained movements is a challenging problem in robotics, particularly for robotic hands. One specific instance of this challenge is teaching robots to fingerspell in sign language. In this paper, we propose an approach for learning dexterous motor imitation from video examples without additional information. To achieve this, we first build a URDF model of a robotic hand with a single actuator for each joint. We then leverage pre-trained deep vision models to extract the 3D pose of the hand from RGB videos. Next, using state-of-the-art reinforcement learning algorithms for motion imitation (namely, proximal policy optimization and soft actor-critic), we train a policy to reproduce the movement extracted from the demonstrations. We identify the optimal set of hyperparameters for imitation based on a reference motion. Finally, we demonstrate the generalizability of our approach by testing it on six different tasks corresponding to fingerspelled letters. Our results show that our approach successfully imitates these fine-grained movements without additional information, highlighting its potential for real-world robotic applications.
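The abstract names the pipeline's stages (3D hand-pose extraction from RGB video, a URDF hand model with one actuator per joint, and PPO/SAC policy training) but not the specific libraries used. The sketch below is one minimal, illustrative way such a pipeline could be wired together, assuming MediaPipe Hands for per-frame 3D landmarks and Stable-Baselines3 for PPO; the `FingerspellEnv` class, its reward, the 20-joint action space, and the file name `demo.mp4` are hypothetical stand-ins, not the authors' implementation.

```python
import cv2
import gymnasium as gym
import mediapipe as mp
import numpy as np
from stable_baselines3 import PPO  # SAC is importable from the same package


def extract_reference_motion(video_path: str) -> np.ndarray:
    """Extract a (T, 21, 3) trajectory of 3D hand landmarks from an RGB video."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.hands.Hands(static_image_mode=False,
                                  max_num_hands=1,
                                  min_detection_confidence=0.5) as hands:
        while True:
            ok, bgr = cap.read()
            if not ok:
                break
            result = hands.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
            if result.multi_hand_world_landmarks:
                lm = result.multi_hand_world_landmarks[0].landmark
                frames.append([[p.x, p.y, p.z] for p in lm])
    cap.release()
    return np.asarray(frames, dtype=np.float32)


class FingerspellEnv(gym.Env):
    """Toy imitation environment (assumed interface, not the paper's): the agent
    drives one actuator per joint and is rewarded for matching the reference
    hand keypoints at the current time step."""

    def __init__(self, reference: np.ndarray, n_joints: int = 20):
        self.reference = reference.reshape(len(reference), -1)  # (T, 63)
        self.t = 0
        self.joints = np.zeros(n_joints, dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(n_joints,),
                                           dtype=np.float32)
        self.observation_space = gym.spaces.Box(
            -np.inf, np.inf, shape=(n_joints + 63,), dtype=np.float32)

    def _obs(self) -> np.ndarray:
        idx = min(self.t, len(self.reference) - 1)
        return np.concatenate([self.joints, self.reference[idx]])

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.joints[:] = 0.0
        return self._obs(), {}

    def step(self, action):
        self.joints = np.clip(self.joints + 0.1 * action, -1.0, 1.0)
        # Placeholder for forward kinematics: a real setup would step the URDF
        # hand in a physics simulator and read back its 3D keypoints.
        simulated_keypoints = np.zeros(63, dtype=np.float32)
        reward = -float(np.linalg.norm(simulated_keypoints - self.reference[self.t]))
        self.t += 1
        terminated = self.t >= len(self.reference)
        return self._obs(), reward, terminated, False, {}


reference = extract_reference_motion("demo.mp4")  # illustrative file name
env = FingerspellEnv(reference)
model = PPO("MlpPolicy", env)
model.learn(total_timesteps=100_000)
```

In the paper's actual setup, the simulated keypoints would come from stepping the URDF hand model in a physics engine rather than the placeholder above, and `SAC("MlpPolicy", env)` can be dropped in wherever `PPO` appears to reproduce the second algorithm the abstract mentions.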
| Original language | English |
| --- | --- |
| Number of pages | 7 |
| DOIs | |
| Publication status | Published - 2023 |
| Event | 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Korea, Republic of. Duration: 28 Aug 2023 → 31 Aug 2023. Conference number: 32nd |
Conference
| Conference | 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) |
| --- | --- |
| Abbreviated title | RO-MAN 2023 |
| Country/Territory | Korea, Republic of |
| City | Busan |
| Period | 28/08/23 → 31/08/23 |