Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization

Yuan Zhang, Jianhong Wang, Joschka Boedecker

Research output: Contribution to conference › Paper › peer-review

Abstract

Reinforcement learning (RL) is recognized as lacking generalization and robustness under environmental perturbations, which excessively restricts its application to real-world robotics. Prior work claimed that adding regularization to the value function is equivalent to learning a robust policy under uncertain transitions. Although this regularization-robustness transformation is appealing for its simplicity and efficiency, it remains underexplored in continuous control tasks. In this paper, we propose a new regularizer named Uncertainty Set Regularizer (USR), which formulates the uncertainty set on the parametric space of the transition function. To deal with unknown uncertainty sets, we further propose a novel adversarial approach to generate them based on the value function. We evaluate USR on the Real-world Reinforcement Learning (RWRL) benchmark and the Unitree A1 robot, demonstrating improved robustness in perturbed testing environments and in sim-to-real scenarios.
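The regularization-robustness transformation the abstract refers to can be illustrated numerically: for a small L2 uncertainty ball on the transition parameters, the worst-case value is approximated to first order by the nominal value minus a gradient-norm penalty. The sketch below uses a toy value function and illustrative names; it is not the authors' USR implementation, only a minimal demonstration of the underlying approximation.

```python
import numpy as np

def value(theta):
    # Toy value function of transition parameters theta (illustrative only).
    return np.sin(theta[0]) + 0.5 * theta[1] ** 2

def worst_case_value(theta, eps, n_samples=10000, rng=None):
    # Brute force: minimum of V over perturbations on an L2 sphere of radius eps.
    rng = np.random.default_rng(0) if rng is None else rng
    d = rng.normal(size=(n_samples, theta.size))
    d = eps * d / np.linalg.norm(d, axis=1, keepdims=True)
    return min(value(theta + di) for di in d)

def regularized_value(theta, eps, h=1e-5):
    # First-order surrogate: V(theta) - eps * ||grad V(theta)||,
    # with the gradient estimated by central finite differences.
    grad = np.array([
        (value(theta + h * e) - value(theta - h * e)) / (2 * h)
        for e in np.eye(theta.size)
    ])
    return value(theta) - eps * np.linalg.norm(grad)

theta = np.array([0.3, -0.7])
eps = 0.05
print(worst_case_value(theta, eps))   # worst case over the uncertainty set
print(regularized_value(theta, eps))  # gradient-norm regularized value
```

For small `eps` the two quantities agree closely, which is why penalizing the value function can stand in for an explicit worst-case search over the uncertainty set.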
Original language: English
Pages: 1400-1424
Number of pages: 25
Publication status: Published - 2023
Event: 7th Conference on Robot Learning, CoRL 2023 - Atlanta, United States
Duration: 6 Nov 2023 – 9 Nov 2023

Conference

Conference: 7th Conference on Robot Learning, CoRL 2023
Country/Territory: United States
Period: 6/11/23 – 9/11/23
