A Gradient-Aware Search Algorithm for Constrained Markov Decision Processes

Sami Khairy, Prasanna Balaprakash, Lin X. Cai

Research output: Contribution to journal › Article › peer-review

Abstract

The canonical solution methodology for finite constrained Markov decision processes (CMDPs), where the objective is to maximize the expected infinite-horizon discounted rewards subject to constraints on the expected infinite-horizon discounted costs, is based on convex linear programming (LP). In this brief, we first prove that the optimization objective in the dual linear program of a finite CMDP is a piecewise linear convex (PWLC) function with respect to the Lagrange penalty multipliers. Next, we propose a novel, provably optimal, two-level gradient-aware search (GAS) algorithm that exploits the PWLC structure to find the optimal state-value function and Lagrange penalty multipliers of a finite CMDP. The proposed algorithm is applied to two constrained stochastic control problems for performance comparison with binary search (BS), Lagrangian primal–dual optimization (PDO), and LP. Compared with the benchmark algorithms, the proposed GAS algorithm is shown to converge to the optimal solution quickly and without any hyperparameter tuning. In addition, its convergence speed is not sensitive to the initialization of the Lagrange multipliers.
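The abstract does not include pseudocode, so the following is a minimal illustrative sketch, not the authors' implementation, of how a two-level search over a single Lagrange multiplier could exploit the PWLC structure of the dual: the inner level solves the unconstrained MDP for the penalized reward r - λc by value iteration, and the outer level updates λ by intersecting the supporting lines of the dual at the two ends of a bracket, using the discounted constraint slack as a subgradient. The uniform initial-state distribution, the single cost constraint, the value-iteration inner solver, and all function and variable names are assumptions made for illustration.

```python
import numpy as np

def solve_penalized_mdp(P, r, c, lam, d, gamma=0.99, tol=1e-8):
    """Inner level (illustrative): value iteration on the scalarized reward r - lam * c.

    P: transitions of shape (S, A, S); r, c: reward/cost arrays of shape (S, A);
    d: budget on the expected discounted cost. Returns the dual value D(lam) and a
    subgradient w.r.t. lam for an assumed uniform initial-state distribution.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = (r - lam * c) + gamma * P @ V          # shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    pi = Q.argmax(axis=1)                          # greedy policy for this lam
    # Evaluate the discounted cost of pi to obtain the slope of the active linear piece.
    P_pi = P[np.arange(S), pi]                     # shape (S, S)
    c_pi = c[np.arange(S), pi]
    C = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
    mu = np.full(S, 1.0 / S)                       # assumed initial distribution
    dual_value = mu @ V + lam * d
    subgrad = d - mu @ C                           # negative when the constraint is violated
    return dual_value, subgrad

def gradient_aware_search(P, r, c, d, lam_hi=100.0, gamma=0.99, tol=1e-6):
    """Outer level (illustrative): minimize the PWLC dual D(lam) over lam >= 0 by
    repeatedly evaluating the intersection of the supporting lines at the bracket ends."""
    lam_lo = 0.0
    D_lo, g_lo = solve_penalized_mdp(P, r, c, lam_lo, d, gamma)
    if g_lo >= 0:                                  # constraint already satisfied at lam = 0
        return lam_lo
    D_hi, g_hi = solve_penalized_mdp(P, r, c, lam_hi, d, gamma)
    while True:
        # Intersection of y = D_lo + g_lo*(x - lam_lo) and y = D_hi + g_hi*(x - lam_hi);
        # for a convex PWLC dual this point lies inside the bracket.
        lam_new = (D_hi - D_lo + g_lo * lam_lo - g_hi * lam_hi) / (g_lo - g_hi)
        if min(lam_new - lam_lo, lam_hi - lam_new) < tol:
            return lam_new                         # bracket no longer shrinks: at a kink
        D_new, g_new = solve_penalized_mdp(P, r, c, lam_new, d, gamma)
        if g_new < 0:
            lam_lo, D_lo, g_lo = lam_new, D_new, g_new
        else:
            lam_hi, D_hi, g_hi = lam_new, D_new, g_new
```

In this sketch the outer loop needs no step size: each new multiplier is determined entirely by the dual values and subgradients at the bracket endpoints, which is one plausible reading of the abstract's claims that the method requires no hyperparameter tuning and is insensitive to the initialization of the multipliers.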

Original language: English
Pages (from-to): 1-8
Number of pages: 8
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOIs
State: Accepted/In press - 2023

Keywords

  • Constrained Markov decision process (CMDP)
  • Convergence
  • Costs
  • Dynamic programming
  • Genetic algorithms
  • Lagrangian primal–dual optimization (PDO)
  • Markov processes
  • Optimization
  • Search problems
  • Gradient-aware search (GAS)
  • Piecewise linear convex (PWLC)
