Deep reinforcement learning control approach to mitigating actuator attacks

Chengwei Wu, Wei Pan, Rick Staa, Jianxing Liu, Guanghui Sun, Ligang Wu

Research output: Contribution to journal › Article › peer-review

Abstract

This paper investigates the deep reinforcement learning based secure control problem for cyber–physical systems (CPS) under false data injection attacks. The CPS under attack is described as a Markov decision process (MDP), based on which the secure controller design is formulated as learning an action policy from data. Building on the soft actor–critic learning algorithm, a Lyapunov-based soft actor–critic learning algorithm is proposed to train a secure policy offline for the CPS under attack. Unlike existing results, not only the convergence of the learning algorithm but also the stability of the closed-loop system under the learned policy is proved, which is essential for security- and stability-critical applications. Finally, both a satellite attitude control system and a robot arm system are used to show the effectiveness of the proposed scheme, and comparisons between the proposed learning algorithm and the classical PD controller demonstrate the advantages of the designed control algorithm.
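The abstract does not give the algorithm's details, but the general idea of a Lyapunov-augmented soft actor–critic objective can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the function `lyapunov_actor_loss`, the learned Lyapunov-critic values `L_curr`/`L_next`, and the penalty weight `beta` are all assumptions introduced here for exposition.

```python
import numpy as np

# Hedged sketch (not the paper's exact algorithm): a SAC-style actor
# objective augmented with a Lyapunov-decrease penalty. A learned
# "Lyapunov critic" assigns a value L(s) to each state; the actor is
# penalized whenever L fails to decrease along the closed-loop trajectory,
# which is the intuition behind Lyapunov-based stability guarantees.

def lyapunov_actor_loss(q_values, log_probs, L_next, L_curr,
                        alpha=0.2, beta=10.0):
    """Mean SAC actor loss plus a Lyapunov-decrease penalty.

    q_values : Q(s, a) for actions sampled from the current policy
    log_probs: log pi(a|s) for those actions (entropy term)
    L_next   : Lyapunov-critic value at the successor state s'
    L_curr   : Lyapunov-critic value at the current state s
    beta     : weight on the stability penalty (assumed hyperparameter)
    """
    sac_term = alpha * log_probs - q_values       # standard SAC objective
    decrease = np.maximum(L_next - L_curr, 0.0)   # positive when L grows
    return np.mean(sac_term + beta * decrease)

# Toy batch: the penalty vanishes when L decreases at every sample.
q  = np.array([1.0, 2.0, 0.5])
lp = np.array([-1.0, -0.5, -2.0])
loss_stable   = lyapunov_actor_loss(q, lp,
                                    L_next=np.array([0.1, 0.2, 0.0]),
                                    L_curr=np.array([0.5, 0.9, 0.3]))
loss_unstable = lyapunov_actor_loss(q, lp,
                                    L_next=np.array([0.9, 1.2, 0.8]),
                                    L_curr=np.array([0.5, 0.9, 0.3]))
assert loss_unstable > loss_stable  # violating the decrease condition costs more
```

In this sketch, minimizing the loss pushes the policy both toward high soft value (the SAC term) and toward actions along which the Lyapunov function decreases, mirroring the stability condition the paper proves for the learned policy.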

Original language: English
Article number: 110999
Journal: Automatica
Volume: 152
Early online date: 31 Mar 2023
DOIs
Publication status: Published - 1 Jun 2023

Keywords

  • Cyber–physical systems
  • Deep reinforcement learning
  • False data injection attacks
  • Lyapunov stability

