Multi-fidelity reinforcement learning with control variates

Research output: Contribution to journal › Article › peer-review

Abstract

In many computational science and engineering applications, the output of a system of interest corresponding to a given input can be queried at different levels of fidelity with different costs. Typically, low-fidelity data is cheap and abundant, while high-fidelity data is expensive and scarce. In this work, we study the reinforcement learning (RL) problem in the presence of multiple environments with different levels of fidelity for a given control task. We focus on improving the RL agent's performance with multi-fidelity data. Specifically, a multi-fidelity estimator that exploits the cross-correlations between the low- and high-fidelity returns is proposed to reduce the variance in the estimation of the state–action value function. The proposed estimator, which is based on the method of control variates, is used to design a multi-fidelity Monte Carlo RL (MFMCRL) algorithm that improves the learning of the agent in the high-fidelity environment. The impacts of variance reduction on policy evaluation and policy improvement are theoretically analyzed using probability bounds. Our theoretical analysis and numerical experiments demonstrate that, for a finite budget of high-fidelity data samples, the proposed MFMCRL agent attains superior performance compared with a standard RL agent that uses only high-fidelity environment data to learn the optimal policy.
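To make the abstract's central idea concrete, the following is a minimal sketch of the classical control-variates construction it invokes, not the paper's exact algorithm. It assumes paired high- and low-fidelity return samples for a fixed state–action pair and a low-fidelity mean `mu_lf` estimated from abundant cheap samples; the function name and variable names are illustrative, not from the paper.

```python
import numpy as np

def control_variate_q_estimate(returns_hf, returns_lf, mu_lf):
    """Variance-reduced Monte Carlo estimate of Q(s, a) via control variates.

    returns_hf : high-fidelity returns sampled from (s, a)
    returns_lf : correlated low-fidelity returns, paired with returns_hf
    mu_lf      : low-fidelity mean return, estimated from cheap, abundant samples
    """
    returns_hf = np.asarray(returns_hf, dtype=float)
    returns_lf = np.asarray(returns_lf, dtype=float)

    # Optimal control-variate coefficient: alpha* = Cov(HF, LF) / Var(LF).
    cov = np.cov(returns_hf, returns_lf)
    alpha = cov[0, 1] / cov[1, 1]

    # Shift the plain high-fidelity sample mean by the low-fidelity discrepancy.
    # With alpha*, the estimator's variance shrinks by a factor of (1 - rho^2),
    # where rho is the correlation between the HF and LF returns.
    return returns_hf.mean() + alpha * (mu_lf - returns_lf.mean())
```

The sketch shows why cross-correlation drives the benefit: when the low- and high-fidelity returns are strongly correlated, rho is close to 1 and the variance reduction factor (1 - rho^2) approaches zero, so a small high-fidelity budget yields a much tighter value estimate.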

Original language: English
Article number: 127963
Journal: Neurocomputing
Volume: 597
DOIs
State: Published - Sep 7 2024

Keywords

  • Multi-fidelity estimation
  • Reinforcement learning
