Benchmarking the performance of neuromorphic and spiking neural network simulators

Shruti R. Kulkarni, Maryam Parsa, J. Parker Mitchell, Catherine D. Schuman

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Software simulators play a critical role in the development of new algorithms and system architectures in any field of engineering. Neuromorphic computing, which has shown potential in building brain-inspired energy-efficient hardware, suffers a slow-down in the development cycle due to a lack of flexible and easy-to-use simulators of either neuromorphic hardware itself or of spiking neural networks (SNNs), the type of neural network computation executed on most neuromorphic systems. While there are several openly available neuromorphic or SNN simulation packages developed by a variety of research groups, they have mostly targeted computational neuroscience simulations, and only a few have targeted small-scale machine learning tasks with SNNs. Evaluations or comparisons of these simulators have often targeted computational neuroscience-style workloads. In this work, we seek to evaluate the performance of several publicly available SNN simulators with respect to non-computational neuroscience workloads, in terms of speed, flexibility, and scalability. We evaluate the performance of the NEST, Brian2, Brian2GeNN, BindsNET and Nengo packages under a common front-end neuromorphic framework. Our evaluation tasks include a variety of different network architectures and workload types to mimic the computation common in different algorithms, including feed-forward network inference, genetic algorithms, and reservoir computing. We also study the scalability of each of these simulators when running on different computing hardware, from single core CPU workstations to multi-node supercomputers. Our results show that the BindsNET simulator has the best speed and scalability for most of the SNN workloads (sparse, dense, and layered SNN architectures) on a single core CPU. However, when comparing the simulators leveraging the GPU capabilities, Brian2GeNN outperforms the others for these workloads in terms of scalability. 
NEST performs the best for small sparse networks and is also the most flexible simulator in terms of reconfiguration capability. NEST shows a speedup of at least 2× compared to the other packages when running evolutionary algorithms for SNNs. The multi-node and multi-thread capabilities of NEST also deliver at least a 2× speedup over the remaining simulators (single-core CPU or GPU based) for large, sparse networks. We conclude our work by providing a set of recommendations on the suitability of these simulators for different tasks and scales of operation. We also present the characteristics of a future, ideal general-purpose SNN simulator for different neuromorphic computing workloads.
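The feed-forward inference workloads benchmarked in the paper boil down to repeatedly stepping a layer of leaky integrate-and-fire (LIF) neurons and timing the run. The sketch below illustrates that style of measurement in plain Python; the decay constant, threshold, input value, and function names are illustrative choices, not taken from the paper or from any of the evaluated simulators.

```python
import time

def lif_step(v, inputs, decay=0.9, v_th=1.0):
    """One discrete-time LIF update: leak, integrate input, spike, reset."""
    v = [decay * vi + xi for vi, xi in zip(v, inputs)]
    spikes = [1 if vi >= v_th else 0 for vi in v]
    v = [0.0 if s else vi for vi, s in zip(v, spikes)]
    return v, spikes

def run_benchmark(n_neurons=1000, n_steps=100, drive=0.2):
    """Time n_steps of constant input through one LIF layer (toy workload)."""
    v = [0.0] * n_neurons
    total_spikes = 0
    start = time.perf_counter()
    for _ in range(n_steps):
        v, spikes = lif_step(v, [drive] * n_neurons)
        total_spikes += sum(spikes)
    elapsed = time.perf_counter() - start
    return elapsed, total_spikes
```

A real comparison along the lines of the paper would wrap the equivalent network built in each simulator (NEST, Brian2, BindsNET, Nengo) in the same timing harness and sweep the network size to probe scalability.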

Original language: English
Pages (from-to): 145-160
Number of pages: 16
Journal: Neurocomputing
Volume: 447
DOIs
State: Published - Aug 4 2021

Funding

This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC05-00OR22725. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This research used resources of the Compute and Data Environment for Science (CADES) at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. We would like to thank Chris Layton for his support in our utilization of CADES Cloud. We would also like to thank Bill Kay for his graph algorithms input. Notice: This manuscript has been authored in part by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

Funders (Funder number)

CADES
DOE Public Access Plan
Data Environment for Science
United States Government
U.S. Department of Energy
Office of Science
Advanced Scientific Computing Research (DE-AC05-00OR22725)

Keywords

• Benchmarking
• High Performance Computing (HPC) systems
• Neuromorphic computers
• Neuromorphic computing workloads
• Neuromorphic simulators
• Scalable systems
• Spiking neural networks
