TY - GEN
T1 - Training Spiking Neural Networks Using Combined Learning Approaches
AU - Elbrecht, Daniel
AU - Parsa, Maryam
AU - Kulkarni, Shruti R.
AU - Mitchell, J. Parker
AU - Schuman, Catherine D.
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/12/1
Y1 - 2020/12/1
N2 - Spiking neural networks (SNNs), the class of neural networks used in neuromorphic computing, are difficult to train using traditional back-propagation techniques. Spike timing-dependent plasticity (STDP) is a biologically inspired learning mechanism that can be used to train SNNs. Evolutionary algorithms have also been demonstrated as a method for training SNNs. In this work, we explore the relationship between these two training methodologies. We evaluate STDP and evolutionary optimization as standalone methods for training networks, and also evaluate a combined approach where STDP weight updates are applied within an evolutionary algorithm. We also apply Bayesian hyperparameter optimization as a meta-learner for each of the algorithms. We find that STDP by itself is not an ideal learning rule for randomly connected networks, while the inclusion of STDP within an evolutionary algorithm leads to similar performance, with a few interesting differences. This study suggests future work in understanding the relationship between network topology and learning rules.
AB - Spiking neural networks (SNNs), the class of neural networks used in neuromorphic computing, are difficult to train using traditional back-propagation techniques. Spike timing-dependent plasticity (STDP) is a biologically inspired learning mechanism that can be used to train SNNs. Evolutionary algorithms have also been demonstrated as a method for training SNNs. In this work, we explore the relationship between these two training methodologies. We evaluate STDP and evolutionary optimization as standalone methods for training networks, and also evaluate a combined approach where STDP weight updates are applied within an evolutionary algorithm. We also apply Bayesian hyperparameter optimization as a meta-learner for each of the algorithms. We find that STDP by itself is not an ideal learning rule for randomly connected networks, while the inclusion of STDP within an evolutionary algorithm leads to similar performance, with a few interesting differences. This study suggests future work in understanding the relationship between network topology and learning rules.
KW - Bayesian optimization
KW - evolutionary algorithms
KW - spike-timing-dependent plasticity
KW - spiking neural networks
UR - https://www.scopus.com/pages/publications/85099697085
U2 - 10.1109/SSCI47803.2020.9308443
DO - 10.1109/SSCI47803.2020.9308443
M3 - Conference contribution
AN - SCOPUS:85099697085
T3 - 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020
SP - 1995
EP - 2001
BT - 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020
Y2 - 1 December 2020 through 4 December 2020
ER -