A study of complex deep learning networks on high performance, neuromorphic, and quantum computers

Thomas E. Potok, Catherine D. Schuman, Steven R. Young, Robert M. Patton, Federico Spedalieri, Jeremy Liu, Ke Thia Yao, Garrett Rose, Gangotree Chakma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

18 Scopus citations

Abstract

Current deep learning models use highly optimized convolutional neural networks (CNNs) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. More complex topologies have been proposed but are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to the input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable on a von Neumann architecture. We show that a quantum computer can find high-quality values of intra-layer connections and weights in tractable time as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topologies and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures and potentially enables the solution of very complicated problems that are unsolvable with current computing technologies.
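The abstract's HPC component automatically determines layer-based network topology. As a rough illustration only (not taken from the paper: the candidate encoding, surrogate fitness function, and mutation operators below are all invented stand-ins; a real system would train each candidate network and score it by validation accuracy), an evolutionary search over CNN layer configurations might be sketched as:

```python
import random

random.seed(0)

def random_topology(max_layers=4):
    """A candidate topology: a list of (num_filters, kernel_size) pairs."""
    n = random.randint(1, max_layers)
    return [(random.choice([8, 16, 32, 64]), random.choice([3, 5, 7]))
            for _ in range(n)]

def fitness(topology):
    """Toy surrogate score: prefers moderate depth and filter counts.
    In a real search this would be validation accuracy after training."""
    depth_score = -abs(len(topology) - 3)
    filter_score = -sum(abs(f - 32) for f, _ in topology) / 64.0
    return depth_score + filter_score

def evolve(pop_size=20, generations=10):
    """Truncation selection plus simple add/drop-layer mutations."""
    population = [random_topology() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # keep the best half
        children = []
        for p in parents:
            child = list(p)
            if random.random() < 0.5 and len(child) > 1:
                child.pop(random.randrange(len(child)))  # mutation: drop a layer
            else:
                child.append((random.choice([8, 16, 32, 64]),
                              random.choice([3, 5, 7])))  # mutation: add a layer
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(len(best), best)
```

On an HPC system, the fitness evaluations (one network training each) are independent and would be farmed out across nodes in parallel each generation.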

Original language: English
Title of host publication: Proceedings of MLHPC 2016
Subtitle of host publication: Machine Learning in HPC Environments - Held in conjunction with SC 2016: The International Conference for High Performance Computing, Networking, Storage and Analysis
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 47-55
Number of pages: 9
ISBN (Electronic): 9781509038824
DOIs
State: Published - Jan 27 2017
Event: 2016 Machine Learning in HPC Environments, MLHPC 2016 - Salt Lake City, United States
Duration: Nov 14 2016 → …

Publication series

Name: Proceedings of MLHPC 2016: Machine Learning in HPC Environments - Held in conjunction with SC 2016: The International Conference for High Performance Computing, Networking, Storage and Analysis

Conference

Conference: 2016 Machine Learning in HPC Environments, MLHPC 2016
Country/Territory: United States
City: Salt Lake City
Period: 11/14/16 → …

Funding

This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC05-00OR22725.

Funders (Funder number):
U.S. Department of Energy
Office of Science (DE-AC05-00OR22725)
Advanced Scientific Computing Research
