Abstract
Convolutional neural networks (CNNs) have recently attracted considerable attention due to their outstanding accuracy in applications such as image recognition and natural language processing. While one advantage of CNNs over other types of neural networks is their reduced computational cost, faster execution is still desired for both training and inference. Since convolution operations account for most of the execution time, multiple algorithms have been, and continue to be, developed to accelerate this type of operation. However, due to the wide range of convolution parameter configurations used in CNNs and the possible data type representations, it is not straightforward to assess in advance which of the available algorithms will perform best in each particular case. In this paper, we present a performance evaluation of the convolution algorithms provided by cuDNN, the library used by most deep learning frameworks for their GPU operations. In our analysis, we leverage the convolution parameter configurations of widely used CNNs and discuss which algorithms are better suited depending on the convolution parameters, for both 32-bit and 16-bit floating-point (FP) data representations. Our results show that the filter size and the number of inputs are the most significant parameters when selecting a GPU convolution algorithm for 32-bit FP data. For 16-bit FP, leveraging specialized arithmetic units (NVIDIA Tensor Cores) is key to obtaining the best performance.
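The kind of per-configuration measurement the abstract describes can be reproduced with cuDNN's built-in algorithm auto-tuner. The sketch below, a minimal illustration rather than the paper's actual benchmark harness, times every forward-convolution algorithm for one example layer shape (the 3x3/64-channel configuration is an assumed example, not taken from the paper) and prints them fastest-first; it requires a CUDA-capable GPU and the cuDNN library.

```cpp
// Sketch: enumerate and time cuDNN's forward-convolution algorithms for one
// layer configuration. The layer shape is illustrative, not from the paper.
#include <cudnn.h>
#include <cstdio>

int main() {
    cudnnHandle_t handle;
    cudnnCreate(&handle);

    // Input activations: N=32 images, C=64 channels, 56x56 spatial (NCHW, FP32).
    cudnnTensorDescriptor_t xDesc, yDesc;
    cudnnCreateTensorDescriptor(&xDesc);
    cudnnSetTensor4dDescriptor(xDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               32, 64, 56, 56);

    // Filters: K=128 output channels, 3x3 kernel.
    cudnnFilterDescriptor_t wDesc;
    cudnnCreateFilterDescriptor(&wDesc);
    cudnnSetFilter4dDescriptor(wDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW,
                               128, 64, 3, 3);

    // Convolution: padding 1, stride 1, no dilation.
    cudnnConvolutionDescriptor_t convDesc;
    cudnnCreateConvolutionDescriptor(&convDesc);
    cudnnSetConvolution2dDescriptor(convDesc, 1, 1, 1, 1, 1, 1,
                                    CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);
    // For FP16 data, enabling Tensor Core math would look like:
    // cudnnSetConvolutionMathType(convDesc, CUDNN_TENSOR_OP_MATH);

    // Derive the output shape from the input, filter, and convolution descriptors.
    int n, c, h, w;
    cudnnGetConvolution2dForwardOutputDim(convDesc, xDesc, wDesc, &n, &c, &h, &w);
    cudnnCreateTensorDescriptor(&yDesc);
    cudnnSetTensor4dDescriptor(yDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               n, c, h, w);

    // Benchmark all available algorithms; results come back sorted by time.
    const int kMaxAlgos = 8;
    cudnnConvolutionFwdAlgoPerf_t perf[kMaxAlgos];
    int found = 0;
    cudnnFindConvolutionForwardAlgorithm(handle, xDesc, wDesc, convDesc, yDesc,
                                         kMaxAlgos, &found, perf);
    for (int i = 0; i < found; ++i)
        if (perf[i].status == CUDNN_STATUS_SUCCESS)
            printf("algo %d: %.3f ms, workspace %zu bytes\n",
                   (int)perf[i].algo, perf[i].time, perf[i].memory);

    cudnnDestroyConvolutionDescriptor(convDesc);
    cudnnDestroyFilterDescriptor(wDesc);
    cudnnDestroyTensorDescriptor(xDesc);
    cudnnDestroyTensorDescriptor(yDesc);
    cudnnDestroy(handle);
    return 0;
}
```

Repeating this over the layer shapes of a full network, and again with FP16 descriptors, yields per-configuration rankings of the kind the paper analyzes.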
Original language | English |
---|---|
Article number | 8721631 |
Pages (from-to) | 70461-70473 |
Number of pages | 13 |
Journal | IEEE Access |
Volume | 7 |
DOIs | |
State | Published - 2019 |
Externally published | Yes |
Funding
This work was supported in part by the European Union's Horizon 2020 Research and Innovation Program through a Marie Skłodowska-Curie action under Grant 749516, and in part by the Spanish Juan de la Cierva program under Grant IJCI-2017-33511.
Funders | Funder number |
---|---|
European Union's Horizon 2020 Research and Innovation Program (Marie Skłodowska-Curie) | 749516
Spanish Juan de la Cierva | IJCI-2017-33511
Keywords
- GPU
- Neural network
- convolution
- cuDNN
- deep learning
- Volta