A distributed kernel summation framework for general-dimension machine learning

Dongryeol Lee, Piyush Sao, Richard Vuduc, Alexander G. Gray

Research output: Contribution to journal › Article › peer-review

Abstract

Kernel summations are a ubiquitous computational bottleneck in many data analysis methods. In this paper, we attempt to marry, for the first time, the best techniques from parallel computing, where kernel summations arise in low dimensions, with the best general-dimension algorithms from the machine learning literature. We provide the first distributed implementation of a kernel summation framework that can utilize: (i) various types of deterministic and probabilistic approximations suitable for low- and high-dimensional problems with large numbers of data points; (ii) any multidimensional binary tree, using both distributed-memory and shared-memory parallelism; and (iii) a dynamic load-balancing scheme to correct work imbalances during the computation. Our hybrid message passing interface (MPI)/OpenMP codebase provides a general framework for accelerating the computation of many popular machine learning methods. Our experiments show scalability results for kernel density estimation on a synthetic ten-dimensional dataset containing over one billion points, and on a subset of the Sloan Digital Sky Survey data, on up to 6,144 cores.
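To make the bottleneck concrete: the kernel summation the abstract refers to computes, for each query point q_j, the sum over all reference points x_i of K(||q_j - x_i||), which costs O(N*M) when evaluated naively. Below is a minimal C++ sketch of that naive computation. It is illustrative only, not the authors' code: the function names, the Gaussian kernel choice, and the bandwidth value are assumptions. The paper's contribution is replacing exactly this loop with tree-based deterministic and probabilistic approximations distributed over MPI processes.

    // Illustrative only: a naive O(N*M) kernel summation with a Gaussian
    // kernel. The paper's framework attacks exactly this loop with
    // tree-based approximations and hybrid MPI/OpenMP parallelism.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Gaussian kernel K(d^2) = exp(-d^2 / (2 h^2)); the bandwidth h is an
    // arbitrary illustrative choice, not a value from the paper.
    double gaussian_kernel(double dist_sq, double h) {
      return std::exp(-dist_sq / (2.0 * h * h));
    }

    // sums[j] = sum over all reference points x of K(||q_j - x||).
    std::vector<double> kernel_sums(const std::vector<std::vector<double>>& refs,
                                    const std::vector<std::vector<double>>& queries,
                                    double h) {
      std::vector<double> sums(queries.size(), 0.0);
      #pragma omp parallel for  // shared-memory parallelism over queries
      for (long j = 0; j < static_cast<long>(queries.size()); ++j) {
        for (const auto& x : refs) {
          double d2 = 0.0;  // squared Euclidean distance ||q_j - x||^2
          for (std::size_t k = 0; k < x.size(); ++k) {
            const double diff = queries[j][k] - x[k];
            d2 += diff * diff;
          }
          sums[j] += gaussian_kernel(d2, h);
        }
      }
      return sums;
    }

    int main() {
      // Tiny 2-D example; the paper scales the same computation to over a
      // billion points in ten dimensions on thousands of cores.
      const std::vector<std::vector<double>> pts = {
          {0.0, 0.0}, {1.0, 0.0}, {0.0, 1.0}};
      for (double s : kernel_sums(pts, pts, /*h=*/0.5)) std::printf("%f\n", s);
      return 0;
    }

Compiled with g++ -fopenmp, the pragma parallelizes the outer query loop; without OpenMP support the pragma is simply ignored and the code runs sequentially.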

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: Statistical Analysis and Data Mining
Volume: 7
Issue number: 1
DOIs
State: Published - Feb 2014
Externally published: Yes

Keywords

  • CUDA
  • GPGPU
  • Kernel methods
  • Nonparametric methods
  • Parallel machine learning
  • Parallel multidimensional trees
