Minimizing energy consumption from connected signalized intersections by reinforcement learning

S. M. A. Bin Al Islam, H. M. Abdul Aziz, Hong Wang, Stanley E. Young

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

11 Scopus citations

Abstract

Explicit energy minimization objectives are often discouraged in signal optimization algorithms due to their negative impact on mobility performance. One potential direction to address this problem is a balanced objective function that achieves the desired mobility with minimized energy consumption. This research developed a reinforcement learning (RL) based control with reward functions that consider energy and mobility jointly: a penalty term is introduced for the number of stops. Further, we proposed a clustering-based technique to make the state space finite, which is critical for a tractable implementation of the RL algorithm. We implemented the algorithm in a calibrated NG-SIM network within the traffic micro-simulator PTV VISSIM. With a sole focus on energy, we report a 47% reduction in energy consumption compared with existing signal control schemes, at the cost of a 65.6% increase in system travel time. In contrast, the control strategy focusing on energy minimization with a penalty for stops yields a 6.7% reduction in energy consumption with a 27% increase in system travel time. The developed RL algorithm with a flexible penalty function in the reward can achieve desired energy goals for a network of signalized intersections without compromising mobility performance. Disclaimer: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
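
For readers who want a concrete picture of the two ideas summarized in the abstract (a joint energy/mobility reward with a penalty on stops, and clustering to keep the state space finite so that tabular RL stays tractable), the following Python sketch shows one plausible arrangement. It is a minimal illustration under stated assumptions: the names ClusteredStateSpace, stop_penalty, alpha, gamma, and n_clusters are hypothetical, not the authors' implementation, and the coupling to the PTV VISSIM micro-simulator is omitted.

```python
# Illustrative sketch only: structure and parameter names are assumptions based on
# the abstract, not the authors' implementation; the VISSIM interface is omitted.
import numpy as np
from sklearn.cluster import KMeans


class ClusteredStateSpace:
    """Map continuous traffic observations (e.g., per-approach queue lengths and
    speeds from connected vehicles) to a finite set of state indices via k-means,
    so that a tabular RL method remains tractable."""

    def __init__(self, observations, n_clusters=50):
        # observations: (n_samples, n_features) array collected from simulation runs
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(observations)

    def index(self, observation):
        return int(self.kmeans.predict(np.asarray(observation).reshape(1, -1))[0])


def reward(energy_consumed, num_stops, stop_penalty=0.5):
    """Joint energy/mobility reward: negative energy use minus a penalty per stop.
    Setting stop_penalty = 0 recovers the energy-only objective, which the paper
    reports saves 47% energy but increases system travel time by 65.6%."""
    return -energy_consumed - stop_penalty * num_stops


def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update over the clustered (finite) state space."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
```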

Original language: English
Title of host publication: 2018 IEEE Intelligent Transportation Systems Conference, ITSC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1870-1875
Number of pages: 6
ISBN (Electronic): 9781728103235
DOIs
State: Published - Dec 7 2018
Externally published: Yes
Event: 21st IEEE International Conference on Intelligent Transportation Systems, ITSC 2018 - Maui, United States
Duration: Nov 4 2018 - Nov 7 2018

Publication series

Name: IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
Volume: 2018-November

Conference

Conference: 21st IEEE International Conference on Intelligent Transportation Systems, ITSC 2018
Country/Territory: United States
City: Maui
Period: 11/4/18 - 11/7/18

Funding

This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.

Funders: U.S. Department of Energy

Keywords

• Reinforcement learning
• connected vehicles
• energy minimization
• fuel consumption
• traffic state observability
