Deep reinforcement learning for autonomous water heater control

Kadir Amasyali, Jeffrey Munk, Kuldeep Kurte, Teja Kuruganti, Helia Zandi

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

Electric water heaters represent 14% of the electricity consumption in residential buildings. An average household in the United States (U.S.) spends about USD 400–600 (0.45 ¢/L–0.68 ¢/L) on water heating every year. In this context, water heaters are often considered a valuable asset for Demand Response (DR) and building energy management system (BEMS) applications. To this end, this study proposes a model-free deep reinforcement learning (RL) approach that aims to minimize the electricity cost of a water heater under a time-of-use (TOU) electricity pricing policy using only standard DR commands. In this approach, a set of RL agents with different look-ahead periods were trained using the deep Q-networks (DQN) algorithm, and their performance was tested on an unseen pair of price and hot water usage profiles. The testing results showed that the RL agents can reduce electricity costs by 19% to 35% compared to the baseline operation without causing any discomfort to end users. Additionally, the RL agents outperformed rule-based and model predictive control (MPC)-based controllers and achieved performance comparable to optimization-based control.
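The article itself reports results rather than code. Purely as an illustrative sketch of the kind of controller the abstract describes, the following Python/PyTorch fragment shows a minimal DQN setup: a Q-network over an assumed state (tank temperature, hour of day, and a price and hot-water-usage look-ahead), epsilon-greedy selection over an assumed two-command DR action set, and one temporal-difference update. All names, dimensions, and the reward definition are assumptions for illustration, not the authors' implementation.

import random

import torch
import torch.nn as nn

LOOKAHEAD = 8                   # hours of price/usage forecast in the state (assumed)
STATE_DIM = 2 + 2 * LOOKAHEAD   # tank temperature, hour of day, price + usage forecasts
N_ACTIONS = 2                   # standard DR commands, e.g. 0 = "shed", 1 = "normal" (assumed)


class QNetwork(nn.Module):
    """Small MLP mapping the observed state to one Q-value per DR command."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over DR commands during training."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())


def train_step(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN update. The reward is assumed to be the negative electricity
    cost of the step, minus a penalty whenever delivered hot water falls
    below a comfort threshold (a stand-in for the paper's objective)."""
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        max_next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * max_next_q
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In the paper, a separate agent was trained for each look-ahead period, whereas this sketch fixes the look-ahead length as a constant; the replay buffer, target-network synchronization schedule, and the simulated water heater environment are omitted.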

Original language: English
Article number: 548
Journal: Buildings
Volume: 11
Issue number: 11
DOIs
State: Published - Nov 2021

Funding

This work was funded by the U.S. Department of Energy, Energy Efficiency and Renewable Energy, Building Technology Office under contract number DE-AC05-00OR22725. Acknowledgments: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

Funders:
• DOE Public Access Plan
• Energy Efficiency and Renewable Energy, Building Technology Office (funder number: DE-AC05-00OR22725)
• United States Government
• U.S. Department of Energy

Keywords

• Deep Q-networks
• Deep learning
• Demand response
• Heat pump water heater
• Machine learning
• Reinforcement learning
• Smart grid
