TY - JOUR
T1 - Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles
AU - Zhang, Qingrui
AU - Pan, Wei
AU - Reppa, Vasso
N1 - Funding Information:
This work was supported by the Cohesion Project by the Faculty of Mechanical, Maritime and Material Engineering, Delft University of Technology
Publisher Copyright:
© 2022 IEEE.
PY - 2022/7/1
Y1 - 2022/7/1
AB - This paper presents a novel model-reference reinforcement learning algorithm for the intelligent tracking control of uncertain autonomous surface vehicles with collision avoidance. The proposed control algorithm combines a conventional control method with reinforcement learning to enhance control accuracy and intelligence. In the proposed control design, a nominal system is considered for the design of a baseline tracking controller using a conventional control approach. The nominal system also defines the desired behaviour of uncertain autonomous surface vehicles in an obstacle-free environment. Thanks to reinforcement learning, the overall tracking controller is capable of compensating for model uncertainties and achieving collision avoidance at the same time in environments with obstacles. In comparison to traditional deep reinforcement learning methods, our proposed learning-based control can provide stability guarantees and better sample efficiency. We demonstrate the performance of the new algorithm using an example of autonomous surface vehicles.
KW - Autonomous surface vehicles
KW - collision avoidance
KW - control architecture
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85112054290&partnerID=8YFLogxK
U2 - 10.1109/TITS.2021.3086033
DO - 10.1109/TITS.2021.3086033
M3 - Article
SN - 1524-9050
VL - 23
SP - 8770
EP - 8781
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 7
ER -