A stochastic maximum principle approach for reinforcement learning with parameterized environment

Richard Archibald, Feng Bao, Jiongmin Yong

Research output: Contribution to journal › Article › peer-review


Abstract

In this work, we introduce a stochastic maximum principle (SMP) approach for solving the reinforcement learning problem under the assumption that the unknowns in the environment can be parameterized based on physics knowledge. To develop numerical algorithms, we apply an effective online parameter estimation method as our exploration technique, estimating the environment parameter during the training procedure, while exploitation toward the optimal policy is achieved by an efficient backward action learning method for policy improvement under the SMP framework. Numerical experiments demonstrate that the SMP approach to reinforcement learning can produce reliable control policies, and that the gradient descent type optimization in the SMP solver requires fewer training episodes than standard methods based on the dynamic programming principle.
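
The abstract describes a loop with three ingredients: a parameterized environment model, online estimation of the unknown parameter (exploration), and a backward adjoint pass that supplies a Hamiltonian gradient for policy improvement (exploitation). The following is a minimal, self-contained sketch of such a loop, not the paper's actual algorithm: it assumes 1-D linear dynamics with a single unknown drift parameter theta, quadratic running and terminal costs, an open-loop control sequence, and a simple recursive least-squares estimator, all of which are illustrative choices.

```python
# A minimal sketch (not the authors' implementation) of an SMP-style
# reinforcement learning loop with a parameterized environment.
# Assumed toy model: x_{n+1} = x_n + (theta*x_n + u_n)*dt + sig*dW,
# cost J = sum_n (x_n^2 + r*u_n^2)*dt + x_N^2, theta unknown.
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 50
dt = T / N
sig, r, lr = 0.2, 1.0, 5.0
theta_true = 1.0            # unknown to the learner
theta_hat = 0.0             # running estimate of the environment parameter
sxx = sxy = 1e-6            # sufficient statistics for least squares
u = np.zeros(N)             # open-loop control sequence to be learned

for episode in range(200):
    # --- forward pass: roll out one trajectory in the true environment ---
    x = np.empty(N + 1)
    x[0] = 1.0
    for n in range(N):
        drift = theta_true * x[n] + u[n]
        x[n + 1] = x[n] + drift * dt + sig * np.sqrt(dt) * rng.standard_normal()
        # exploration: online least-squares update of theta_hat from the
        # observed increment  x_{n+1} - x_n - u_n*dt ~ theta*x_n*dt + noise
        y = x[n + 1] - x[n] - u[n] * dt
        sxx += (x[n] * dt) ** 2
        sxy += (x[n] * dt) * y
        theta_hat = sxy / sxx

    # --- backward pass: adjoint (costate) under the estimated model ---
    p = np.empty(N + 1)
    p[N] = 2.0 * x[N]                     # gradient of terminal cost x_T^2
    for n in reversed(range(N)):
        p[n] = p[n + 1] * (1.0 + theta_hat * dt) + 2.0 * x[n] * dt

    # --- exploitation: gradient descent on the Hamiltonian w.r.t. u ---
    grad = p[1:] * dt + 2.0 * r * u * dt  # dJ/du_n per time step
    u -= lr * grad

print(f"estimated theta = {theta_hat:.3f} (true {theta_true})")
print(f"first controls: {np.round(u[:5], 3)}")
```

In this sketch the backward adjoint pass plays the role the abstract assigns to backward action learning: the costate p provides the gradient used to update the control directly, rather than fitting a value function as in dynamic programming based methods.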

Original language: English
Article number: 112238
Journal: Journal of Computational Physics
Volume: 488
State: Published - Sep 1 2023

Funding

This work is partially supported by the U.S. Department of Energy through the FASTMath Institute and the Office of Science, Advanced Scientific Computing Research program, under grant DE-SC0022297. The second author (FB) would also like to acknowledge support from the U.S. National Science Foundation through project DMS-2142672. The third author (JY) would like to acknowledge support from the U.S. National Science Foundation through project DMS-2305475.

Keywords

  • Optimal control
  • Optimal filtering
  • Parameter estimation
  • Reinforcement learning
  • Stochastic maximum principle
