TY - GEN
T1 - Accurate and Accelerated Neuromorphic Network Design Leveraging a Bayesian Hyperparameter Pareto Optimization Approach
AU - Parsa, Maryam
AU - Schuman, Catherine
AU - Rathi, Nitin
AU - Ziabari, Amir
AU - Rose, Derek
AU - Mitchell, J. Parker
AU - Johnston, J. Travis
AU - Kay, Bill
AU - Young, Steven
AU - Roy, Kaushik
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/7/27
Y1 - 2021/7/27
N2 - Neuromorphic systems allow for extremely efficient hardware implementations of neural networks (NNs). In recent years, several algorithms have been presented to train spiking NNs (SNNs) for neuromorphic hardware. However, SNNs often provide lower accuracy than their artificial NN (ANN) counterparts or require computationally expensive and slow training/inference methods. To close this gap, designers typically rely on reconfiguring SNNs through adjustments in the neuron/synapse model or the training algorithm itself. Nevertheless, these steps incur significant design time, while still lacking the desired improvement in training/inference times (latency). Designing SNNs that can match the accuracy of ANNs with reasonable training times is a pressing challenge in neuromorphic computing. In this work, we present an alternative approach that treats such designs as an optimization problem rather than an algorithm or architecture redesign. We develop a versatile multiobjective hyperparameter optimization (HPO) for automatically tuning the HPs of two state-of-the-art SNN training algorithms, SLAYER and HYBRID. To the best of our knowledge, this is the first work to improve SNNs' computational efficiency, accuracy, and training time using an efficient HPO. We demonstrate significant performance improvements for SNNs on several datasets without the need to redesign or invent new training algorithms/architectures. Our approach results in more accurate networks with lower latency and, in turn, higher energy efficiency than previous implementations. In particular, we demonstrate improved accuracy and a more than 5× reduction in training/inference time for the SLAYER algorithm on the DVS Gesture dataset. In the case of HYBRID, we demonstrate a 30% reduction in timesteps while surpassing the accuracy of state-of-the-art networks on CIFAR10. Further, our analysis suggests that even a seemingly minor change in HPs can change the accuracy by 5-6×.
AB - Neuromorphic systems allow for extremely efficient hardware implementations of neural networks (NNs). In recent years, several algorithms have been presented to train spiking NNs (SNNs) for neuromorphic hardware. However, SNNs often provide lower accuracy than their artificial NN (ANN) counterparts or require computationally expensive and slow training/inference methods. To close this gap, designers typically rely on reconfiguring SNNs through adjustments in the neuron/synapse model or the training algorithm itself. Nevertheless, these steps incur significant design time, while still lacking the desired improvement in training/inference times (latency). Designing SNNs that can match the accuracy of ANNs with reasonable training times is a pressing challenge in neuromorphic computing. In this work, we present an alternative approach that treats such designs as an optimization problem rather than an algorithm or architecture redesign. We develop a versatile multiobjective hyperparameter optimization (HPO) for automatically tuning the HPs of two state-of-the-art SNN training algorithms, SLAYER and HYBRID. To the best of our knowledge, this is the first work to improve SNNs' computational efficiency, accuracy, and training time using an efficient HPO. We demonstrate significant performance improvements for SNNs on several datasets without the need to redesign or invent new training algorithms/architectures. Our approach results in more accurate networks with lower latency and, in turn, higher energy efficiency than previous implementations. In particular, we demonstrate improved accuracy and a more than 5× reduction in training/inference time for the SLAYER algorithm on the DVS Gesture dataset. In the case of HYBRID, we demonstrate a 30% reduction in timesteps while surpassing the accuracy of state-of-the-art networks on CIFAR10. Further, our analysis suggests that even a seemingly minor change in HPs can change the accuracy by 5-6×.
KW - Bayesian optimization
KW - Neuromorphic computing
KW - hyperparameter optimization
KW - multiobjective optimization
KW - spiking neural networks
UR - http://www.scopus.com/inward/record.url?scp=85117950189&partnerID=8YFLogxK
U2 - 10.1145/3477145.3477160
DO - 10.1145/3477145.3477160
M3 - Conference contribution
AN - SCOPUS:85117950189
T3 - ACM International Conference Proceeding Series
BT - ICONS 2021 - Proceedings of International Conference on Neuromorphic Systems 2021
PB - Association for Computing Machinery
T2 - 2021 International Conference on Neuromorphic Systems, ICONS 2021
Y2 - 27 July 2021 through 29 July 2021
ER -