Q-Learning-Based Optimal Control via Adaptive Critic Network for a Wankel Rotary Engine

Anthony Siming Chen, Guido Herrmann, Reza Islam, Chris Brace, James W.G. Turner, Stuart Burgess

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a new Q-learning-based air-fuel ratio (AFR) controller for a Wankel rotary engine. We first present a mean-value engine model (MVEM) modified to capture the rotary engine dynamics. The AFR regulation problem is reformulated as an optimal PI fuel-tracking control problem over the augmented error dynamics. Leveraging the generalized Hamilton-Jacobi-Bellman (GHJB) equation, we propose a new definition of the Q-function whose arguments are the augmented error and the injected fuel flow rate. We then derive its Q-learning Bellman (QLB) equation from the optimality principle. This allows a controller to be learned online via an adaptive critic network that solves the QLB equation, whose solution satisfies the GHJB equation. The proposed model-free Q-learning-based controller is implemented on an AIE 225CS Wankel engine, where practical experiments validate the optimality and performance of the proposed controller.
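For readers unfamiliar with adaptive-critic Q-learning, the sketch below illustrates the general idea of tuning critic weights to drive a Bellman residual toward zero. It is a minimal, generic illustration, not the paper's formulation: the feature map, weight vector, cost signal, and learning rate (features, Wc, cost, gamma_lr) are hypothetical placeholders, and a discrete-time temporal-difference surrogate stands in for the paper's QLB equation.

```python
import numpy as np

def features(e_aug, u):
    """Quadratic basis over the augmented error e_aug and fuel command u
    (an assumed choice; the paper's basis may differ)."""
    z = np.concatenate([np.asarray(e_aug, dtype=float), [float(u)]])
    # upper-triangular quadratic terms serve as critic features
    return np.outer(z, z)[np.triu_indices(len(z))]

def critic_update(Wc, e_aug, u, e_aug_next, u_next, cost, gamma_lr=0.05):
    """One gradient step on the squared Bellman residual
    delta = cost + Q(x', u') - Q(x, u), with Q(x, u) = Wc . phi(x, u)."""
    phi, phi_next = features(e_aug, u), features(e_aug_next, u_next)
    delta = cost + Wc @ phi_next - Wc @ phi          # Bellman (TD) residual
    grad = delta * (phi_next - phi)                  # gradient of 0.5 * delta**2 w.r.t. Wc
    # normalized step keeps the update bounded for poorly scaled features
    return Wc - gamma_lr * grad / (1.0 + phi @ phi)
```

In such a scheme the critic weights are updated at every sample from measured transitions only, which is what makes the approach model-free; the greedy fuel command is then recovered from the learned Q-function.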
Original language: English
Journal: IEEE Transactions on Control Systems Technology
Publication status: Accepted/In press - 19 Dec 2024

Keywords

  • Adaptive optimal control
  • air-fuel ratio (AFR) control
  • reinforcement learning
  • rotary engines
  • adaptive critic
