TY - GEN
T1 - Deep Learning Evolutionary Optimization for Regression of Rotorcraft Vibrational Spectra
AU - Martinez, Daniel
AU - Brewer, Wesley
AU - Behm, Gregory
AU - Strelzoff, Andrew
AU - Wilson, Andrew
AU - Wade, Daniel
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - A method for Deep Neural Network (DNN) hyperparameter search using evolutionary optimization is proposed for nonlinear, high-dimensional multivariate regression problems. Deep networks often require extensive hyperparameter searches, which can become ambiguous due to network complexity. We therefore propose a user-friendly method that integrates the Dakota optimization library, TensorFlow, and the Galaxy HPC workflow management tool to deploy massively parallel function evaluations within a Genetic Algorithm (GA). Deep Learning Evolutionary Optimization (DLEO) is the GA implementation presented here. Compared with randomly generated and hand-tuned models, DLEO proved significantly faster and more effective at searching for optimal architecture hyperparameter configurations. Implementing DLEO allowed us to find models with higher validation accuracies at lower computational cost in less than 72 hours, compared with weeks of random and manual search. Moreover, DLEO parallel coordinate plots provided valuable insights into network architecture designs and their regression capabilities.
AB - A method for Deep Neural Network (DNN) hyperparameter search using evolutionary optimization is proposed for nonlinear, high-dimensional multivariate regression problems. Deep networks often require extensive hyperparameter searches, which can become ambiguous due to network complexity. We therefore propose a user-friendly method that integrates the Dakota optimization library, TensorFlow, and the Galaxy HPC workflow management tool to deploy massively parallel function evaluations within a Genetic Algorithm (GA). Deep Learning Evolutionary Optimization (DLEO) is the GA implementation presented here. Compared with randomly generated and hand-tuned models, DLEO proved significantly faster and more effective at searching for optimal architecture hyperparameter configurations. Implementing DLEO allowed us to find models with higher validation accuracies at lower computational cost in less than 72 hours, compared with weeks of random and manual search. Moreover, DLEO parallel coordinate plots provided valuable insights into network architecture designs and their regression capabilities.
KW - Genetic algorithms
KW - High performance computing (HPC)
KW - Hyperparameter tuning
KW - Neural architecture search (NAS)
UR - http://www.scopus.com/inward/record.url?scp=85063061793&partnerID=8YFLogxK
U2 - 10.1109/MLHPC.2018.8638645
DO - 10.1109/MLHPC.2018.8638645
M3 - Conference contribution
AN - SCOPUS:85063061793
T3 - Proceedings of MLHPC 2018: Machine Learning in HPC Environments, Held in conjunction with SC 2018: The International Conference for High Performance Computing, Networking, Storage and Analysis
SP - 57
EP - 66
BT - Proceedings of MLHPC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE/ACM Machine Learning in HPC Environments, MLHPC 2018
Y2 - 12 November 2018
ER -