TY - JOUR
T1 - Deep Reinforcement Learning-Based Model-Free On-Line Dynamic Multi-Microgrid Formation to Enhance Resilience
AU - Zhao, Jin
AU - Li, Fangxing
AU - Mukherjee, Srijib
AU - Sticht, Christopher
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/7/1
Y1 - 2022/7/1
N2 - Multi-microgrid formation (MMGF) is a promising solution for enhancing power system resilience. This paper proposes a new deep reinforcement learning (RL) based model-free on-line dynamic MMGF scheme. The dynamic MMGF problem is formulated as a Markov decision process, and a complete deep RL framework is specially designed for the topology-transformable microgrids. In order to reduce the large action space caused by flexible switch operations, a topology transformation method is proposed and an action-decoupling Q-value is applied. Then, a convolutional neural network (CNN) based multi-buffer double deep Q-network (CM-DDQN) is developed to further improve the learning ability of the original DQN method. The proposed deep RL method provides real-time computing to support the on-line dynamic MMGF scheme, and the scheme handles a long-term resilience enhancement problem using an adaptive on-line MMGF to defend against changing conditions. The effectiveness of the proposed method is validated using a 7-bus system and the IEEE 123-bus system. The results show strong learning ability, timely responses to varying system conditions, and convincing resilience enhancement.
AB - Multi-microgrid formation (MMGF) is a promising solution for enhancing power system resilience. This paper proposes a new deep reinforcement learning (RL) based model-free on-line dynamic MMGF scheme. The dynamic MMGF problem is formulated as a Markov decision process, and a complete deep RL framework is specially designed for the topology-transformable microgrids. In order to reduce the large action space caused by flexible switch operations, a topology transformation method is proposed and an action-decoupling Q-value is applied. Then, a convolutional neural network (CNN) based multi-buffer double deep Q-network (CM-DDQN) is developed to further improve the learning ability of the original DQN method. The proposed deep RL method provides real-time computing to support the on-line dynamic MMGF scheme, and the scheme handles a long-term resilience enhancement problem using an adaptive on-line MMGF to defend against changing conditions. The effectiveness of the proposed method is validated using a 7-bus system and the IEEE 123-bus system. The results show strong learning ability, timely responses to varying system conditions, and convincing resilience enhancement.
KW - Convolutional neural network (CNN)
KW - Deep reinforcement learning (DRL)
KW - distributed generation (DG)
KW - extreme weather
KW - microgrids (MGs)
KW - multi-microgrid formation (MMGF)
KW - power system resilience
UR - http://www.scopus.com/inward/record.url?scp=85126696969&partnerID=8YFLogxK
U2 - 10.1109/TSG.2022.3160387
DO - 10.1109/TSG.2022.3160387
M3 - Article
AN - SCOPUS:85126696969
SN - 1949-3053
VL - 13
SP - 2557
EP - 2567
JO - IEEE Transactions on Smart Grid
JF - IEEE Transactions on Smart Grid
IS - 4
ER -