TY - GEN
T1 - Training deep neural networks with constrained learning parameters
AU - Date, Prasanna
AU - Carothers, Christopher D.
AU - Mitchell, John E.
AU - Hendler, James A.
AU - Magdon-Ismail, Malik
N1 - Publisher Copyright:
© 2020 International Conference on Rebooting Computing (ICRC)
PY - 2020/12
Y1 - 2020/12
N2 - Today's deep learning models are primarily trained on CPUs and GPUs. Although these models tend to have low error, they consume high power and use large amounts of memory owing to double-precision floating-point learning parameters. Beyond Moore's law, a significant portion of deep learning tasks would run on edge computing systems, which will form an indispensable part of the entire computation fabric. Consequently, training deep learning models for such systems will have to be tailored and adapted to generate models that have the following desirable characteristics: low error, low memory, and low power. We believe that deep neural networks (DNNs), whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems would be instrumental for intelligent edge computing systems with these desirable characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate gradient descent-based approach for training deep learning models with finite discrete learning parameters. Next, we elaborate on the theoretical underpinnings and evaluate the computational complexity of CoNNTrA. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets and compare their performance to the same models trained using Backpropagation. We use the following performance metrics for the comparison: (i) training error; (ii) validation error; (iii) memory usage; and (iv) training time. Our results indicate that CoNNTrA models use 32× less memory and have errors on par with the Backpropagation models.
AB - Today's deep learning models are primarily trained on CPUs and GPUs. Although these models tend to have low error, they consume high power and use large amounts of memory owing to double-precision floating-point learning parameters. Beyond Moore's law, a significant portion of deep learning tasks would run on edge computing systems, which will form an indispensable part of the entire computation fabric. Consequently, training deep learning models for such systems will have to be tailored and adapted to generate models that have the following desirable characteristics: low error, low memory, and low power. We believe that deep neural networks (DNNs), whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems would be instrumental for intelligent edge computing systems with these desirable characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate gradient descent-based approach for training deep learning models with finite discrete learning parameters. Next, we elaborate on the theoretical underpinnings and evaluate the computational complexity of CoNNTrA. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets and compare their performance to the same models trained using Backpropagation. We use the following performance metrics for the comparison: (i) training error; (ii) validation error; (iii) memory usage; and (iv) training time. Our results indicate that CoNNTrA models use 32× less memory and have errors on par with the Backpropagation models.
KW - Artificial intelligence
KW - Deep learning
KW - Deep neural networks
KW - Machine learning
KW - Training algorithm
UR - http://www.scopus.com/inward/record.url?scp=85100514604&partnerID=8YFLogxK
U2 - 10.1109/ICRC2020.2020.00018
DO - 10.1109/ICRC2020.2020.00018
M3 - Conference contribution
AN - SCOPUS:85100514604
T3 - Proceedings - 2020 International Conference on Rebooting Computing, ICRC 2020
SP - 107
EP - 115
BT - Proceedings - 2020 International Conference on Rebooting Computing, ICRC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 International Conference on Rebooting Computing, ICRC 2020
Y2 - 1 December 2020 through 3 December 2020
ER -