Abstract
This article proposes a data-efficient, model-free reinforcement learning (RL) algorithm based on Koopman operators for complex nonlinear systems. Data-driven optimal control of the nonlinear system is developed by lifting it into a high-dimensional linear system model. Within a data-driven model-based RL framework, we derive an off-policy Bellman equation. Building on this equation, we obtain a data-efficient RL algorithm that does not require the Koopman-built linear system model. The algorithm preserves dynamic information while reducing the data required to learn the optimal control. Numerical and theoretical analyses of Koopman eigenfunctions for dataset truncation in the proposed model-free, data-efficient RL algorithm are discussed. We validate the framework on the excitation control of a power system.
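For readers less familiar with the lifting step, the sketch below illustrates the generic Koopman-with-control pipeline the abstract alludes to: fit a lifted linear model from snapshot data (EDMD with control) and solve an LQR problem on the lifted state. This is a minimal illustration under assumed conventions, not the paper's algorithm; the dictionary `lift` and the function names are hypothetical, and the paper's model-free method specifically avoids constructing the lifted model `(A, B)` explicitly.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lift(X):
    """Hypothetical observable dictionary: constant, state, and squared state."""
    return np.hstack([np.ones((X.shape[0], 1)), X, X**2])

def edmdc(X, U, Y):
    """Fit a lifted linear model z' ~= A z + B u from snapshot data (EDMDc).

    X, Y : (N, n) state snapshots, Y[k] the successor of X[k]
    U    : (N, m) inputs applied at each snapshot
    """
    Z, Zp = lift(X), lift(Y)              # lifted snapshots at k and k+1
    G = np.hstack([Z, U])                 # regressors [z_k, u_k]
    # Least-squares fit: Zp ~= G @ W, with W stacking A^T over B^T
    W, *_ = np.linalg.lstsq(G, Zp, rcond=None)
    A = W[:Z.shape[1]].T                  # (n_z, n_z) lifted dynamics
    B = W[Z.shape[1]:].T                  # (n_z, m) lifted input map
    return A, B

def lqr_gain(A, B, Q, R):
    """Infinite-horizon discrete LQR gain for the lifted model; u = -K z."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

By contrast, the proposed algorithm learns the optimal control directly from data through the off-policy Bellman equation, without identifying `(A, B)` as an intermediate step.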
Original language | English |
---|---|
Pages (from-to) | 1391-1402 |
Number of pages | 12 |
Journal | IEEE Transactions on Cybernetics |
Volume | 54 |
Issue number | 3 |
DOIs | |
State | Published - Mar 1 2024 |
Externally published | Yes |
Keywords
- Data-driven control
- Koopman theory
- Optimal control
- Reinforcement learning (RL)