Fast Training of Deep Neural Networks for Speech Recognition

  • Guojing Cong
  • Brian Kingsbury
  • Chih Chieh Yang
  • Tianyi Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Training large, deep neural network acoustic models for speech recognition on large datasets takes a long time on a single GPU, motivating research on parallel training algorithms. We present an approach for training a bidirectional LSTM acoustic model on the 2000-hour Switchboard corpus. The model we train achieves state-of-the-art word error rates, 7.5% on the Hub5-2000 Switchboard test set and 13.1% on the CallHome test set, and scales to an unprecedented 96 learners while employing only 12 global reductions per epoch of training. Because our implementation incurs far fewer reductions than prior work, it does not require aggressively optimized communication primitives to reach state-of-the-art performance in a short amount of time. With 48 NVIDIA V100 GPUs, training takes 5 hours; with 96 GPUs, around 3 hours.
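The very low reduction count suggests a local-SGD-style scheme: each learner takes many purely local optimizer steps, and the replicas are synchronized by a global average only a handful of times per epoch. The sketch below illustrates that general idea with PyTorch's torch.distributed API; it is a minimal illustration under that assumption, not the authors' implementation, and the model, loss, and sync schedule are placeholders.

```python
# Hypothetical sketch of communication-light data-parallel training: each
# learner runs plain local SGD, and the model replicas are averaged by a
# global all-reduce only a fixed number of times per epoch (the paper reports
# 12 global reductions per epoch). This illustrates the general idea, not the
# authors' algorithm; the model, loss, and schedule are placeholders.
import torch
import torch.distributed as dist

def average_parameters(model: torch.nn.Module) -> None:
    """One global reduction: sum each parameter across learners, then average."""
    world_size = dist.get_world_size()
    with torch.no_grad():
        for p in model.parameters():
            dist.all_reduce(p, op=dist.ReduceOp.SUM)
            p /= world_size

def train_one_epoch(model, optimizer, loader, reductions_per_epoch=12):
    # Synchronize only `reductions_per_epoch` times per epoch, instead of on
    # every mini-batch as synchronous data-parallel SGD would.
    sync_every = max(1, len(loader) // reductions_per_epoch)
    for step, (features, targets) in enumerate(loader, start=1):
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(features), targets)
        loss.backward()
        optimizer.step()                  # purely local update, no communication
        if step % sync_every == 0:
            average_parameters(model)     # infrequent global reduction

if __name__ == "__main__":
    dist.init_process_group(backend="gloo")  # e.g. launched via torchrun
    model = torch.nn.Linear(40, 10)          # stand-in for the BLSTM acoustic model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    data = [(torch.randn(8, 40), torch.randint(0, 10, (8,)))
            for _ in range(120)]             # synthetic stand-in for Switchboard
    train_one_epoch(model, optimizer, data)
    dist.destroy_process_group()
```

Averaging full parameter vectors a few times per epoch keeps communication volume tiny compared with per-batch gradient all-reduce, which is consistent with the abstract's claim that no aggressively optimized communication primitives are needed.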

Original language: English
Title of host publication: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6884-6888
Number of pages: 5
ISBN (Electronic): 9781509066315
State: Published - May 2020
Externally published: Yes
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain
Duration: May 4, 2020 – May 8, 2020

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2020-May
ISSN (Print): 1520-6149

Conference

Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Country/Territory: Spain
City: Barcelona
Period: 05/4/20 – 05/8/20
