Abstract
We propose algorithms and techniques to accelerate the training of deep neural networks for action recognition on a cluster of GPUs. The convergence analysis of our algorithm shows that it can reduce communication cost while also minimizing the number of iterations needed for convergence. We customize the Adam optimizer for our distributed algorithm to improve efficiency, and we employ transfer learning to further reduce training time while improving validation accuracy. On the UCF101 and HMDB51 datasets, the validation accuracies achieved are 93.1% and 67.9%, respectively. With an additional end-to-end trained temporal stream, the validation accuracies achieved on UCF101 and HMDB51 are 93.47% and 81.24%, respectively. To the best of our knowledge, these are the highest accuracies achieved with a two-stream ResNet approach that does not involve computationally expensive 3D convolutions or pretraining on much larger datasets.
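The two headline techniques the abstract names, transfer learning from a pretrained backbone and data-parallel training with a customized Adam optimizer, can be sketched in a few lines of PyTorch. The sketch below is an illustrative assumption rather than the authors' actual algorithm: the model (an ImageNet-pretrained ResNet-18 with a new 101-way head for UCF101), the hyperparameters, and the placeholder tensors standing in for video frames are all hypothetical; only the general pattern (freeze the backbone, all-reduce gradients, apply Adam identically on every rank) reflects the ideas described above.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torchvision.models import resnet18


def train(rank: int, world_size: int, steps: int = 100) -> None:
    # Assumes launch via torchrun, which provides the rendezvous environment.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Transfer learning (hypothetical setup): start from ImageNet weights,
    # freeze the backbone, and train only a new 101-way head
    # (UCF101 has 101 action classes).
    model = resnet18(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 101)
    model = model.cuda(rank)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):
        # Placeholder batch; a real run would sample decoded video frames.
        frames = torch.randn(32, 3, 224, 224, device=rank)
        labels = torch.randint(0, 101, (32,), device=rank)

        optimizer.zero_grad()
        loss_fn(model(frames), labels).backward()

        # Synchronous data parallelism: average gradients across workers so
        # every rank applies the identical Adam update.
        for p in model.fc.parameters():
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size

        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    train(int(os.environ["RANK"]), int(os.environ["WORLD_SIZE"]))
```

In this pattern, each step communicates only the trainable gradients via all-reduce, so per-step communication scales with the number of trainable parameters rather than the data volume; this is the general kind of communication-cost reduction the abstract refers to.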
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 153-165 |
| Number of pages | 13 |
| Journal | Journal of Parallel and Distributed Computing |
| Volume | 134 |
| DOIs | |
| State | Published - Dec 2019 |
| Externally published | Yes |
Funding
Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No. 17-SI-003.
| Funders | Funder number |
|---|---|
| Lawrence Livermore National Laboratory | 17-SI-003 |
| U.S. Department of Energy | |
| Lawrence Livermore National Laboratory | DE-AC52-07NA27344 |
Keywords
- Distributed training
- GPU
- Machine learning
- Transfer learning
- Video analytics