Abstract
We propose a new Q-learning-based air-fuel ratio (AFR) controller for a Wankel rotary engine. We first present a mean-value engine model (MVEM) modified to capture the rotary engine dynamics. The AFR regulation problem is reformulated as an optimal PI control problem for fuel tracking over the augmented error dynamics. Leveraging the generalized Hamilton-Jacobi-Bellman (GHJB) equation, we propose a new definition of the Q-function whose arguments are the augmented error and the injected fuel flow rate. We then derive its Q-learning Bellman (QLB) equation based on the optimality principle. This enables online learning of a controller via an adaptive critic network that solves the QLB equation, whose solution satisfies the GHJB equation. The proposed model-free Q-learning-based controller is implemented on an AIE 225CS Wankel engine, where practical experiments validate the optimality and performance of the proposed controller.
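To make the learning loop concrete, below is a minimal Python sketch of one way an adaptive critic could solve a QLB-type equation for this kind of tracking problem. It assumes a quadratic Q-function over the augmented error (AFR tracking error stacked with its integral, which is what gives the greedy policy a PI structure) and the injected fuel flow rate. All names (`QuadraticCritic`, `greedy_u`), the undiscounted semi-gradient update, and the learning rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class QuadraticCritic:
    """Critic Q(e, u) = z' W z with z = [e; u].

    Illustrative quadratic parameterisation; not the paper's network.
    """

    def __init__(self, n_e, n_u, lr=1e-3):
        self.n_e, self.n_u = n_e, n_u
        self.W = np.eye(n_e + n_u)   # critic weights, kept symmetric
        self.lr = lr

    def q_value(self, e, u):
        z = np.concatenate([e, u])
        return float(z @ self.W @ z)

    def greedy_u(self, e):
        # Minimising the quadratic form over u gives u* = -W_uu^{-1} W_ue e:
        # a linear law in the augmented error, i.e. PI when e stacks the
        # tracking error and its integral.
        W_ue = self.W[self.n_e:, :self.n_e]
        W_uu = self.W[self.n_e:, self.n_e:]
        return -np.linalg.solve(W_uu, W_ue @ e)

    def update(self, e, u, stage_cost, e_next):
        # QLB temporal-difference residual (undiscounted):
        #   delta = r(e, u) + Q(e', u*(e')) - Q(e, u)
        u_next = self.greedy_u(e_next)
        delta = stage_cost + self.q_value(e_next, u_next) - self.q_value(e, u)
        # Semi-gradient step on 0.5 * delta^2 (Q is linear in W)
        z = np.concatenate([e, u])
        self.W += self.lr * delta * np.outer(z, z)
        self.W = 0.5 * (self.W + self.W.T)   # re-symmetrise
        return delta
```

A single closed-loop step under these assumptions (the quadratic stage cost and the numbers are made up for illustration):

```python
critic = QuadraticCritic(n_e=2, n_u=1)   # [error, integral of error], one fuel input
e = np.array([0.10, 0.00])
u = critic.greedy_u(e)
# ... apply u to the engine (or a model), measure the next augmented error:
e_next = np.array([0.08, 0.01])
cost = float(e @ e) + 0.1 * float(u @ u)  # assumed quadratic stage cost r(e, u)
critic.update(e, u, cost, e_next)
```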
| Original language | English |
| --- | --- |
| Journal | IEEE Transactions on Control Systems Technology |
| Publication status | Accepted/In press - 19 Dec 2024 |
Keywords
- Adaptive optimal control
- air-fuel ratio (AFR) control
- reinforcement learning
- rotary engines
- adaptive critic