Model-based Deep Reinforcement Learning for Active Control of Flow around a Circular Cylinder Using Action-informed Episode-based Neural Ordinary Differential Equations

Yiqian Mao, Shan Zhong, Hujun Yin

Research output: Contribution to journal › Article › peer-review


Abstract

Recent applications of deep reinforcement learning (DRL) to active flow control (AFC) predominantly utilize model-free DRL, which interacts with physical systems represented by computational fluid dynamics solvers. However, high computational demands and a tendency towards numerical divergence can significantly compromise the effectiveness of model-free DRL as the Reynolds number (Re) increases. This study presents the first application of model-based DRL to control vortex shedding from a two-dimensional circular cylinder using two synthetic jet actuators at Re = 100. An action-informed episode-based neural ordinary differential equation (AENODE) method is developed to mitigate the error cascading that affects existing studies, which typically adopt a timestep-based NODE (TNODE). Both methods are combined with three feature extraction approaches, namely sensor placement, proper orthogonal decomposition and autoencoders, to construct six low-dimensional dynamical models (LDMs) of the DRL environment. Compared to TNODE, AENODE reduced the prediction error by more than 90% and exhibited more robust convergence when training the agents over repeated runs. Furthermore, the model-based DRL agents identified control strategies very similar to those found by model-free DRL: the AENODE agents achieved 66.2%-72.4% of the reward obtained by model-free DRL, whereas the TNODE agents achieved 43.4%-54.7%. Moreover, implementing model-based DRL for AFC required only 10% of the data and 14%-33% of the total wall-clock time of model-free DRL, of which less than 1% was spent on training the agents. The significant saving in computational cost and the reduced risk of numerical divergence will enable DRL-aided AFC to be applied to more complex flow scenarios occurring at higher Reynolds numbers.
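The abstract describes rolling out a learned, action-conditioned latent dynamics model over a whole episode from a single initial condition, rather than restarting the predictor at every timestep. The following is a minimal illustrative sketch of that idea, not the paper's actual implementation: the latent dimension, action dimension, network shape and the explicit Euler integrator are all assumptions, and the random weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM, HIDDEN = 4, 2, 16  # illustrative sizes only

# Random weights stand in for a trained vector-field network.
W1 = rng.standard_normal((LATENT_DIM + ACTION_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, LATENT_DIM)) * 0.1

def vector_field(z, a):
    """dz/dt = f(z, a): a small MLP conditioned on the control action a."""
    h = np.tanh(np.concatenate([z, a]) @ W1)
    return h @ W2

def rollout_episode(z0, actions, dt=0.01):
    """Integrate one whole episode with explicit Euler from a single z0.

    Episode-based prediction: every state is produced by the continuous
    dynamics from the initial condition, so one-step prediction errors are
    not re-injected as fresh initial conditions at each timestep (the
    error-cascading failure mode of a timestep-based rollout).
    """
    z = z0.copy()
    trajectory = [z.copy()]
    for a in actions:
        z = z + dt * vector_field(z, a)
        trajectory.append(z.copy())
    return np.stack(trajectory)

z0 = rng.standard_normal(LATENT_DIM)
actions = rng.standard_normal((50, ACTION_DIM))  # e.g. jet actuation signals
traj = rollout_episode(z0, actions)
print(traj.shape)  # (51, 4): 50 integration steps plus the initial state
```

In a model-based DRL loop, a surrogate of this kind would replace the CFD solver as the agent's training environment, with the latent state decoded back to flow features (e.g. POD coefficients or autoencoder codes) to compute rewards.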
Original language: English
Article number: 083619
Journal: Physics of Fluids
Volume: 36
Issue number: 8
DOIs
Publication status: Published - 21 Aug 2024

Keywords

  • Flow control
  • Reinforcement learning
  • Reduced order modeling
  • Neural ordinary differential equations
  • Proper orthogonal decomposition
  • Autoencoder

