TY - GEN
T1 - MagmaDNN
T2 - 34th International Conference on High Performance Computing, ISC High Performance 2019
AU - Nichols, Daniel
AU - Tomov, Nathalie Sofia
AU - Betancourt, Frank
AU - Tomov, Stanimire
AU - Wong, Kwai
AU - Dongarra, Jack
N1 - Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - In this paper, we present work towards the development of a new data analytics and machine learning (ML) framework, called MagmaDNN. Our main goal is to provide scalable, high-performance data analytics and ML solutions for scientific applications running on current and upcoming heterogeneous many-core GPU-accelerated architectures. To this end, since many of the needed functionalities are based on standard linear algebra (LA) routines, we designed MagmaDNN to derive its performance from the MAGMA library. This close integration makes the fundamental, scalable, high-performance LA routines available in MAGMA the backend of MagmaDNN. We present design issues for performance and scalability that are specific to ML with Deep Neural Networks (DNNs), as well as the design choices MagmaDNN makes to overcome them. In particular, MagmaDNN uses well-established HPC techniques from the area of dense LA, including task-based parallelization, DAG representations, scheduling, mixed-precision algorithms, asynchronous solvers, and autotuned hyperparameter optimization. We illustrate these techniques and their incorporation into MagmaDNN, and show how they enable it to outperform other currently available frameworks.
AB - In this paper, we present work towards the development of a new data analytics and machine learning (ML) framework, called MagmaDNN. Our main goal is to provide scalable, high-performance data analytics and ML solutions for scientific applications running on current and upcoming heterogeneous many-core GPU-accelerated architectures. To this end, since many of the needed functionalities are based on standard linear algebra (LA) routines, we designed MagmaDNN to derive its performance from the MAGMA library. This close integration makes the fundamental, scalable, high-performance LA routines available in MAGMA the backend of MagmaDNN. We present design issues for performance and scalability that are specific to ML with Deep Neural Networks (DNNs), as well as the design choices MagmaDNN makes to overcome them. In particular, MagmaDNN uses well-established HPC techniques from the area of dense LA, including task-based parallelization, DAG representations, scheduling, mixed-precision algorithms, asynchronous solvers, and autotuned hyperparameter optimization. We illustrate these techniques and their incorporation into MagmaDNN, and show how they enable it to outperform other currently available frameworks.
KW - Data-driven scientific computing
KW - High-performance DNN
KW - Machine learning
UR - http://www.scopus.com/inward/record.url?scp=85076839916&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-34356-9_37
DO - 10.1007/978-3-030-34356-9_37
M3 - Conference contribution
AN - SCOPUS:85076839916
SN - 9783030343552
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 490
EP - 503
BT - High Performance Computing - ISC High Performance 2019 International Workshops, Revised Selected Papers
A2 - Weiland, Michèle
A2 - Juckeland, Guido
A2 - Alam, Sadaf
A2 - Jagode, Heike
PB - Springer
Y2 - 16 June 2019 through 20 June 2019
ER -