Multiagent Graphical Games with Inverse Reinforcement Learning

Vrushabh S. Donge, Bosen Lian, Frank L. Lewis, Ali Davoudi

Research output: Contribution to journal › Article › peer-review


Abstract

This work investigates inverse reinforcement learning (RL) for multiagent systems (MAS) defined by Graphical Apprentice Games. These games are solved by having a learner MAS recover the unknown cost functions of an expert MAS from the expert's demonstrated behavior. We begin by developing a model-based inverse RL algorithm comprising two update loops: 1) an inner-loop optimal control update and 2) an outer-loop inverse optimal control (IOC) update. We then introduce a model-free inverse RL algorithm that uses the online behaviors of the expert and learner MAS without knowledge of their dynamics. Both proposed inverse RL algorithms solve optimal control and IOC as subproblems. The reward functions found by the learner MAS are proven to be both stabilizing and nonunique. Simulated case studies validate the effectiveness of the proposed inverse RL algorithms.
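The two-loop structure described above can be illustrated with a minimal sketch. It is not the paper's algorithm: the graphical-game MAS is reduced to a single expert/learner pair with known discrete-time linear dynamics (so only the model-based case), the system matrices A, B, R and the expert weight Q_expert are hypothetical values, and the outer-loop reward correction dQ is an illustrative IOC-style rule, chosen only because it vanishes once the learner's optimal gain matches the expert's demonstrated gain. The helper solve_dare is likewise an assumed name.

```python
import numpy as np

def solve_dare(A, B, Q, R, iters=500):
    """Inner loop: optimal control for the current cost estimate (Q, R),
    via plain value iteration on the discrete-time Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # greedy gain
        P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)
    return K, P

# Hypothetical linear dynamics shared by expert and learner (illustrative values).
A = np.diag([1.0, 0.8])
B = np.eye(2)
R = np.eye(2)

# The expert's state weight is unknown to the learner; only its gain is observed.
Q_expert = np.diag([3.0, 0.5])
K_expert, _ = solve_dare(A, B, Q_expert, R)
A_exp = A - B @ K_expert                       # expert's closed-loop matrix

# Learner: alternate an inner optimal-control solve with an outer
# IOC-style correction of the state-weight estimate.
Q_hat = np.eye(2)
for _ in range(200):
    K_hat, P_hat = solve_dare(A, B, Q_hat, R)  # inner loop
    A_cl = A - B @ K_hat

    # Outer loop (illustrative): raise Q_hat when the expert's gain is more
    # aggressive than the learner's current optimal gain, lower it otherwise.
    # The correction is zero exactly when K_hat reproduces K_expert.
    dQ = (K_expert.T @ R @ K_expert - K_hat.T @ R @ K_hat
          + A_cl.T @ P_hat @ A_cl - A_exp.T @ P_hat @ A_exp)
    Q_hat = Q_hat + dQ
    Q_hat = 0.5 * (Q_hat + Q_hat.T)            # keep the estimate symmetric

    if np.linalg.norm(dQ) < 1e-10:
        break

print("gain error:", np.linalg.norm(K_hat - K_expert))    # expected to be ~0
print("recovered state weight:\n", np.round(Q_hat, 3))
```

In the paper's model-free variant, the model-based solves above would instead be driven by the online behaviors of the expert and learner; that data-driven machinery, and the graph-coupled structure of the graphical game, are omitted from this sketch.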

Original language: English
Pages (from-to): 841-852
Number of pages: 12
Journal: IEEE Transactions on Control of Network Systems
Volume: 10
Issue number: 2
DOIs
State: Published - Jun 1 2023
Externally published: Yes

Keywords

  • Graphical games
  • inverse optimal control (IOC)
  • inverse reinforcement learning (RL)
  • multiagent system (MAS)
  • optimal control
  • synchronization

