Abstract
Cooperative driving automation enables connected and automated vehicles (CAVs) to devise cooperative merging control, offering great potential to alleviate traffic congestion, reduce energy consumption, and enhance safety in highway on-ramp operations. Although numerous CAV cooperative merging algorithms have been developed to improve energy and traffic performance, agreement-seeking among CAV users and their local benefits have been understudied. This can lead to rejections of cooperative merging plans and jeopardize CAV performance, as cooperation may require certain CAVs to sacrifice their local benefits to achieve a system optimum. To address this issue, this study first leverages multi-agent deep reinforcement learning (MADRL) that factors in both local and regional rewards to demonstrate the discrepancies between CAV users' local benefits and the system optimum. Next, the existence of a correlated equilibrium is proved to characterize the convergence of MADRL training. This further enables the incorporation of incentives (computed from the reward discrepancies) to compensate CAV users for their local benefits and facilitate system-optimal agreements in cooperative merging operations.
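The abstract does not spell out the reward structure or the incentive formula. The following minimal Python sketch (all names hypothetical, not from the paper) illustrates one way the two ideas could fit together: a per-agent training signal that blends a local and a regional reward term, and an incentive computed from the discrepancy between a CAV's selfish local reward and its local reward under the system-optimal plan.

```python
# Hypothetical sketch: blended_reward, incentive, and w_local are
# illustrative names, not taken from the paper.

def blended_reward(local_reward: float, regional_reward: float,
                   w_local: float = 0.5) -> float:
    """Per-agent training signal mixing a CAV's local reward (e.g., its
    own energy use and delay) with a shared regional reward (e.g.,
    merge-zone throughput), in the spirit of the paper's local/regional
    reward factoring."""
    return w_local * local_reward + (1.0 - w_local) * regional_reward

def incentive(local_reward_selfish: float,
              local_reward_system_optimal: float) -> float:
    """Compensation for a CAV whose local reward under the system-optimal
    merging plan falls short of what selfish behavior would yield; zero
    when the CAV is not worse off."""
    return max(0.0, local_reward_selfish - local_reward_system_optimal)

# Example: a mainline CAV asked to yield loses 0.8 units of local reward
# relative to acting selfishly, so it would be offered an incentive of 0.8.
print(incentive(-1.2, -2.0))  # -> 0.8
```

Under this reading, an incentive of zero means the system-optimal plan is already individually acceptable, so compensation is only paid to CAVs that sacrifice local benefit.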
| Original language | English |
|---|---|
| Pages (from-to) | 103-108 |
| Number of pages | 6 |
| Journal | IFAC-PapersOnLine |
| Volume | 59 |
| Issue number | 3 |
| State | Published - May 1 2025 |
| Event | 12th IFAC Symposium on Intelligent Autonomous Vehicles (IAV 2025), Phoenix, United States, May 7-9, 2025 |
Funding
This work is supported by the US Department of Energy, Vehicle Technologies Office, Energy Efficient Mobility Systems program. The authors are also grateful for Nathan Goulet's suggestions on improving the manuscript.
Keywords
- connected and automated vehicles
- cooperative merging control
- incentives
- intelligent transportation systems
- multi-agent deep reinforcement learning