Abstract
Intelligent Heating, Ventilation, and Air Conditioning (HVAC) control using deep reinforcement learning (DRL) has recently gained considerable attention due to its ability to optimally control the complex behavior of HVAC systems. However, the adaptability challenges a DRL agent may face during the deployment phase remain underexplored. Online learning is impractical for such applications because of the long learning period and the likely poor comfort control during learning. Alternatively, a DRL agent can be pre-trained on a building model prior to deployment. However, developing an accurate building model for every house and then deploying a pre-trained DRL model for HVAC control would not be cost-effective. In this study, we evaluate the ability of DRL-based HVAC control to provide cost savings when pre-trained on one building model and deployed on different house models with varying user comfort preferences. The pre-trained model reduced cost by around 30% over the baseline when validated in a simulation environment, and achieved up to a 21% cost reduction when deployed in a real house. These findings provide experimental evidence that a pre-trained DRL agent can adapt to different house environments and comfort settings.
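The pre-train-then-deploy idea in the abstract can be illustrated with a deliberately simplified sketch: a tabular Q-learning agent (a stand-in for the paper's DRL agent, which it is not) is trained on one first-order thermal house model and then rolled out on a second house with different thermal parameters. All model constants (`k`, `heater_gain`, setpoint, price) are hypothetical, chosen only to make the toy runnable, and do not come from the paper.

```python
import random

def simulate_step(temp, heat_on, outdoor, k, heater_gain):
    # Illustrative first-order thermal model: temperature drifts toward
    # outdoor air at rate k, and the heater adds a fixed gain per step.
    return temp + k * (outdoor - temp) + (heater_gain if heat_on else 0.0)

def reward(temp, heat_on, setpoint=21.0, price=1.0):
    # Penalize energy use and deviation from the comfort setpoint.
    return -((price if heat_on else 0.0) + 2.0 * abs(temp - setpoint))

def bucket(temp, lo=10.0, hi=30.0, n=40):
    # Discretize temperature into n bins for the tabular Q-table.
    i = int((temp - lo) / (hi - lo) * n)
    return max(0, min(n - 1, i))

def train_q(house, episodes=300, alpha=0.1, gamma=0.95, eps=0.2):
    # Pre-train on one house model (k, heater_gain); actions: 0=off, 1=heat.
    k, gain = house
    Q = [[0.0, 0.0] for _ in range(40)]
    for _ in range(episodes):
        temp = 18.0
        for _ in range(96):  # one day at 15-minute steps
            s = bucket(temp)
            a = random.randrange(2) if random.random() < eps \
                else (0 if Q[s][0] >= Q[s][1] else 1)
            nxt = simulate_step(temp, a == 1, 10.0, k, gain)
            s2 = bucket(nxt)
            Q[s][a] += alpha * (reward(nxt, a == 1) + gamma * max(Q[s2]) - Q[s][a])
            temp = nxt
    return Q

def rollout(Q, house):
    # Deploy the frozen policy on a (possibly different) house model
    # and report energy cost and accumulated comfort deviation.
    k, gain = house
    temp, cost, discomfort = 18.0, 0.0, 0.0
    for _ in range(96):
        a = 0 if Q[bucket(temp)][0] >= Q[bucket(temp)][1] else 1
        temp = simulate_step(temp, a == 1, 10.0, k, gain)
        cost += 1.0 if a == 1 else 0.0
        discomfort += abs(temp - 21.0)
    return cost, discomfort

# Pre-train on house A, then deploy on house B with different thermal mass.
Q = train_q((0.05, 0.8))
cost, discomfort = rollout(Q, (0.08, 0.6))
```

The study itself uses a deep RL agent and a detailed building simulation rather than a tabular toy; the sketch only shows the structure of the evaluation, i.e., the policy is learned on one model and frozen before being tested on another.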
Original language | English |
---|---|
Article number | 7727 |
Journal | Sustainability (Switzerland) |
Volume | 12 |
Issue number | 18 |
DOIs | |
State | Published - Sep 2020 |
Bibliographical note
Publisher Copyright: © 2020 by the authors.
Keywords
- Adaptability
- Building energy
- Building simulation
- Deep reinforcement learning
- Demand response
- Optimal HVAC control
- Smart grid