Goal Exploration via Adaptive Skill Distribution for Goal-Conditioned Reinforcement Learning

Lisheng Wu, Ke Chen

Research output: Other contribution

Abstract

Exploration efficiency poses a significant challenge in goal-conditioned reinforcement learning (GCRL) tasks, particularly those with long horizons and sparse rewards. A primary limitation on exploration efficiency is the agent's inability to leverage structural patterns in the environment. In this study, we introduce a novel framework, GEASD (Goal Exploration via Adaptive Skill Distribution), designed to capture these patterns through an adaptive skill distribution during the learning process. This distribution optimizes the local entropy of achieved goals within a contextual horizon, enhancing goal-spreading behaviors and facilitating deep exploration in states containing familiar structural patterns. Our experiments reveal marked improvements in exploration efficiency under the adaptive skill distribution compared to a uniform skill distribution. Moreover, the learned skill distribution generalizes robustly, achieving substantial exploration progress on unseen tasks that contain similar local structures.
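The core idea of the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical construction, not the authors' implementation: `local_entropy` and `adaptive_skill_distribution` are assumed names, goals are simplified to 1-D values, and the entropy-to-probability mapping is an arbitrary softmax. The sketch shows how the entropy of recently achieved goals under each skill could be turned into a skill-sampling distribution that favors goal-spreading skills.

```python
import numpy as np

def local_entropy(goals, bins=8, lo=0.0, hi=1.0):
    # Shannon entropy of a histogram over 1-D achieved goals
    # within a short "contextual horizon" of recent experience.
    hist, _ = np.histogram(goals, bins=bins, range=(lo, hi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def adaptive_skill_distribution(goals_per_skill, temperature=1.0):
    # Hypothetical mapping: score each skill by the local entropy of
    # the goals it achieved, then softmax the scores into a sampling
    # distribution. Skills that spread goals widely get more probability.
    scores = np.array([local_entropy(g) for g in goals_per_skill])
    logits = scores / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

# Example: one skill spreads goals uniformly, another stays in one spot.
spreading = np.linspace(0.0, 1.0, 100)     # high-entropy achieved goals
stuck = np.full(100, 0.5)                  # low-entropy achieved goals
dist = adaptive_skill_distribution([spreading, stuck])
```

Under this toy construction, the goal-spreading skill receives the larger sampling probability, which is the qualitative behavior the abstract attributes to the adaptive distribution.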
Original language: English
Publication status: Published - 19 Apr 2024

