TY - GEN
T1 - Mitigating Catastrophic Forgetting in Deep Learning in a Streaming Setting Using Historical Summary
AU - Dash, Sajal
AU - Yin, Junqi
AU - Shankar, Mallikarjun
AU - Wang, Feiyi
AU - Feng, Wu Chun
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Recent advancements in scientific equipment and the adoption of electronics and the Internet of Things (IoT) in our everyday lives have resulted in large and complex data being produced at a high rate. Making meaningful and timely knowledge discoveries from this big data at a modest cost is difficult due to limitations in computing power and storage. Training deep learning models incrementally in a streaming setting can help overcome these limitations. However, incrementally trained models increasingly perform poorly on past data, a well-known phenomenon named catastrophic forgetting. To mitigate catastrophic forgetting when training in a streaming setting, we propose constructing a historical summary over time and using the summary together with newly arrived data during incremental training. We propose various data summarization techniques, such as random sampling, micro-clustering, coreset computation, and autoencoders, to counteract catastrophic forgetting. We built a pipeline for incrementally training deep learning models on streaming data with a historical summary. We demonstrate the effectiveness of the historical summary in mitigating catastrophic forgetting using three case studies involving three different deep learning applications: an Artificial Neural Network (ANN) for a classification task on the MNIST dataset, a language model (RNN-LM) on the WikiText2 dataset, and a Convolutional Neural Network (CNN), ResNet50, to classify the ImageNet dataset. Through the training of these models, we observe that catastrophic forgetting is evident in the ANN and the CNN but not in the RNN. For the first task, our method recovers up to 47.9% of the accuracy lost to catastrophic forgetting. For the third task, the historical summary recovers classification accuracy by up to 25%. For the second task, although there is no evidence of catastrophic forgetting, the training performance (perplexity, PPL) improves by up to 26% with the historical summary.
AB - Recent advancements in scientific equipment and the adoption of electronics and the Internet of Things (IoT) in our everyday lives have resulted in large and complex data being produced at a high rate. Making meaningful and timely knowledge discoveries from this big data at a modest cost is difficult due to limitations in computing power and storage. Training deep learning models incrementally in a streaming setting can help overcome these limitations. However, incrementally trained models increasingly perform poorly on past data, a well-known phenomenon named catastrophic forgetting. To mitigate catastrophic forgetting when training in a streaming setting, we propose constructing a historical summary over time and using the summary together with newly arrived data during incremental training. We propose various data summarization techniques, such as random sampling, micro-clustering, coreset computation, and autoencoders, to counteract catastrophic forgetting. We built a pipeline for incrementally training deep learning models on streaming data with a historical summary. We demonstrate the effectiveness of the historical summary in mitigating catastrophic forgetting using three case studies involving three different deep learning applications: an Artificial Neural Network (ANN) for a classification task on the MNIST dataset, a language model (RNN-LM) on the WikiText2 dataset, and a Convolutional Neural Network (CNN), ResNet50, to classify the ImageNet dataset. Through the training of these models, we observe that catastrophic forgetting is evident in the ANN and the CNN but not in the RNN. For the first task, our method recovers up to 47.9% of the accuracy lost to catastrophic forgetting. For the third task, the historical summary recovers classification accuracy by up to 25%. For the second task, although there is no evidence of catastrophic forgetting, the training performance (perplexity, PPL) improves by up to 26% with the historical summary.
KW - Catastrophic Forgetting
KW - Deep Learning
KW - Incremental Learning
KW - Reduction
KW - Streaming
KW - Summary
UR - http://www.scopus.com/inward/record.url?scp=85124517432&partnerID=8YFLogxK
U2 - 10.1109/DRBSD754563.2021.00006
DO - 10.1109/DRBSD754563.2021.00006
M3 - Conference contribution
AN - SCOPUS:85124517432
T3 - Proceedings of DRBSD-7 2021: 7th International Workshop on Data Analysis and Reduction for Big Scientific Data, held in conjunction with SC 2021: The International Conference for High Performance Computing, Networking, Storage and Analysis
SP - 11
EP - 18
BT - Proceedings of DRBSD-7 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th International Workshop on Data Analysis and Reduction for Big Scientific Data, DRBSD-7 2021
Y2 - 14 November 2021
ER -