TY - CONF
T1 - Neuromorphic architecture optimization for task-specific dynamic learning
AU - Madireddy, Sandeep
AU - Yanguas-Gil, Angel
AU - Balaprakash, Prasanna
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/7/23
Y1 - 2019/7/23
AB - The ability to learn and adapt in real time is a central feature of biological systems. Neuromorphic architectures that demonstrate such versatility can greatly enhance our ability to process information efficiently at the edge. A key challenge, however, is to understand which learning rules are best suited to specific tasks and how the relevant hyperparameters can be fine-tuned. In this work, we introduce a conceptual framework in which the learning process is integrated into the network itself, which allows us to cast meta-learning as a mathematical optimization problem. We employ DeepHyper, a scalable asynchronous model-based search package, to simultaneously optimize the choice of meta-learning rules and their hyperparameters. We demonstrate our approach on two datasets, MNIST and Fashion-MNIST, using a network architecture inspired by the learning center of the insect brain. Our results show that the optimal learning rules can be dataset-dependent even for similar tasks. This dependence underscores the importance of building versatility and flexibility into the learning algorithms, and it sheds light on experimental findings in insect neuroscience that show a heterogeneity of learning rules within the insect mushroom body.
KW - Dynamic Learning
KW - Edge Processing
KW - Meta-Learning
KW - Neuromorphic Architecture
KW - Optimization
UR - http://www.scopus.com/inward/record.url?scp=85073258673&partnerID=8YFLogxK
U2 - 10.1145/3354265.3354270
DO - 10.1145/3354265.3354270
M3 - Conference contribution
AN - SCOPUS:85073258673
T3 - ACM International Conference Proceeding Series
BT - ICONS 2019 - Proceedings of the International Conference on Neuromorphic Systems
PB - Association for Computing Machinery
T2 - 2019 International Conference on Neuromorphic Systems, ICONS 2019
Y2 - 23 July 2019 through 25 July 2019
ER -