Data-Efficient Reinforcement Learning for Complex Nonlinear Systems

Vrushabh S. Donge, Bosen Lian, Frank L. Lewis, Ali Davoudi

Research output: Contribution to journal › Article › peer-review


Abstract

This article proposes a data-efficient, model-free reinforcement learning (RL) algorithm based on Koopman operators for complex nonlinear systems. The nonlinear system is lifted into a high-dimensional linear model, on which data-driven optimal control is developed. Within a data-driven, model-based RL framework, we derive an off-policy Bellman equation. Building on this equation, we deduce the data-efficient RL algorithm, which does not require a Koopman-built linear system model. The algorithm preserves dynamic information while reducing the data required to learn the optimal control. Numerical and theoretical analyses of the Koopman eigenfunctions used for dataset truncation within the proposed model-free, data-efficient RL algorithm are discussed. We validate the framework on excitation control of a power system.
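The Koopman lifting step described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a standard extended-dynamic-mode-decomposition (EDMD) style fit on an assumed toy system (`a`, `b`, `c`, and the observable dictionary are illustrative choices) whose dynamics become exactly linear in the lifted coordinates `[x1, x2, x1**2]`:

```python
import numpy as np

# Toy nonlinear system (assumed for illustration):
#   x1' = a*x1,  x2' = b*x2 + c*x1^2
# Lifting through the observables [x1, x2, x1^2] yields an exactly
# linear model, since (x1^2)' = a^2 * x1^2.
a, b, c = 0.9, 0.5, 1.0

def step(x):
    x1, x2 = x
    return np.array([a * x1, b * x2 + c * x1**2])

def lift(x):
    # Hypothetical observable dictionary.
    x1, x2 = x
    return np.array([x1, x2, x1**2])

# Collect snapshot pairs (x_k, x_{k+1}) along one trajectory.
x = np.array([1.0, 1.0])
X, Y = [], []
for _ in range(50):
    x_next = step(x)
    X.append(lift(x))
    Y.append(lift(x_next))
    x = x_next

X, Y = np.array(X).T, np.array(Y).T   # columns are lifted snapshots
K = Y @ np.linalg.pinv(X)             # least-squares Koopman approximation

# The lifted dynamics are exactly linear here, so the fit is near-exact.
err = np.linalg.norm(K @ X - Y) / np.linalg.norm(Y)
print(f"lifted dimension: {K.shape[0]}, relative fit error: {err:.2e}")
```

Once such a linear lifted model (or, as in the article, data generated from the lifted coordinates directly) is available, linear optimal-control and off-policy RL machinery can be applied to the nonlinear system.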

Original language: English
Pages (from-to): 1391-1402
Number of pages: 12
Journal: IEEE Transactions on Cybernetics
Volume: 54
Issue number: 3
DOIs
State: Published - Mar 1 2024
Externally published: Yes

Keywords

  • Data-driven control
  • Koopman theory
  • optimal control
  • reinforcement learning (RL)
