Improvement of parallelization efficiency of batch pattern BP training algorithm using Open MPI

Volodymyr Turchenko, Lucio Grandinetti, George Bosilca, Jack J. Dongarra

Research output: Contribution to journal › Article › peer-review

18 Scopus citations

Abstract

The use of the tuned collectives module of Open MPI to improve the parallelization efficiency of the parallel batch pattern back-propagation training algorithm for a multilayer perceptron is considered in this paper. The multilayer perceptron model and the usual sequential batch pattern training algorithm are described theoretically. An algorithmic description of a parallel version of the batch pattern training method is introduced. The parallelization efficiency results obtained with the Open MPI tuned collectives module and with MPICH2 are compared. Our results show that (i) the Open MPI tuned collectives module outperforms the MPICH2 implementation both on an SMP computer and on a computational cluster, and (ii) different internal algorithms of the MPI_Allreduce() collective operation give better results in different scenarios and on different parallel systems. Therefore, the properties of the communication network and of the user application should be taken into account when a specific collective algorithm is chosen.
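The core of the parallel scheme described above is that each process accumulates the weight gradient over its own subset of training patterns, after which a single MPI_Allreduce() sums the partial gradients so every replica can apply the same weight update. The following C sketch illustrates this structure under assumed, illustrative details: the weight count, pattern count, and the accumulate_gradient() placeholder are hypothetical and not taken from the paper.

```c
/* Minimal sketch of one parallel batch pattern training step, assuming an
 * MLP whose weight gradient is accumulated into a flat array. Sizes and
 * the accumulate_gradient() helper are illustrative placeholders. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_WEIGHTS  1024   /* hypothetical total number of MLP weights */
#define NUM_PATTERNS 4096   /* hypothetical size of the training set */

/* Hypothetical placeholder: back-propagate one pattern's error and add
 * its contribution to the local gradient. */
static void accumulate_gradient(int pattern, double *grad) {
    (void)pattern;
    for (int w = 0; w < NUM_WEIGHTS; ++w)
        grad[w] += 0.0; /* real code would compute the BP gradient here */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process handles a contiguous block of training patterns. */
    int chunk = NUM_PATTERNS / size;
    int first = rank * chunk;
    int last  = (rank == size - 1) ? NUM_PATTERNS : first + chunk;

    double *local_grad  = calloc(NUM_WEIGHTS, sizeof(double));
    double *global_grad = malloc(NUM_WEIGHTS * sizeof(double));

    /* Local phase: accumulate gradients over this process's patterns. */
    for (int p = first; p < last; ++p)
        accumulate_gradient(p, local_grad);

    /* Communication phase: one MPI_Allreduce sums the partial gradients,
     * so every process holds the full batch gradient and can apply the
     * identical weight update, keeping the replicas synchronized. */
    MPI_Allreduce(local_grad, global_grad, NUM_WEIGHTS,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("batch gradient reduced across %d processes\n", size);

    free(local_grad);
    free(global_grad);
    MPI_Finalize();
    return 0;
}
```

For experimenting with the tuned module's internal MPI_Allreduce() algorithms, Open MPI exposes MCA parameters that override its built-in decision rules; for example, `mpirun --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_allreduce_algorithm 4 -np 8 ./bp_train` forces one specific variant (the numbering of the algorithms varies by Open MPI version, so consult `ompi_info --param coll tuned` on the target system).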

Original language: English
Pages (from-to): 525-533
Number of pages: 9
Journal: Procedia Computer Science
Volume: 1
Issue number: 1
DOIs
State: Published - 2010
Externally published: Yes

Keywords

  • Multilayer perceptron
  • Open MPI
  • Parallelization efficiency
  • Tuned collectives module
