GIST: distributed training for large-scale graph convolutional networks

Cameron R. Wolfe, Jingkang Yang, Fangshuo Liao, Arindam Chowdhury, Chen Dun, Artun Bayer, Santiago Segarra, Anastasios Kyrillidis

Research output: Contribution to journal › Article › peer-review

Abstract

The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs, we pioneer efficient training of large-scale GCN models with the proposal of a novel, distributed training framework, called GIST. GIST disjointly partitions the parameters of a GCN model into several smaller sub-GCNs that are trained independently and in parallel. Compatible with all GCN architectures and existing sampling techniques, GIST (i) improves model performance, (ii) scales to training on arbitrarily large graphs, (iii) decreases wall-clock training time, and (iv) enables the training of markedly overparameterized GCN models. Remarkably, with GIST, we train an astonishingly wide 32,768-dimensional GraphSAGE model, which exceeds the capacity of a single GPU by a factor of 8×, to SOTA performance on the Amazon2M dataset.
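
The abstract's core mechanism, disjointly partitioning a GCN's parameters into independently trainable sub-GCNs, can be pictured with a minimal sketch. Below, assuming a two-layer GCN with weight matrices W1 and W2, the hidden dimension is split disjointly across sub-GCNs; the helper names (partition_hidden, merge_back) and the random split are illustrative assumptions, not the paper's actual implementation.

```python
import torch

# Minimal sketch of GIST-style feature-wise partitioning for a two-layer GCN.
# Helper names and the synchronization scheme are illustrative assumptions;
# the paper's actual implementation may differ.

def partition_hidden(W1, W2, num_subnets):
    """Disjointly split the hidden dimension of (W1, W2) across sub-GCNs."""
    d_hid = W1.shape[1]
    perm = torch.randperm(d_hid)          # random disjoint partition
    chunks = perm.chunk(num_subnets)
    # Each sub-GCN receives matching columns of W1 and rows of W2, so the
    # partitions share no parameters and can be trained independently.
    return [(idx, W1[:, idx].clone(), W2[idx, :].clone()) for idx in chunks]

def merge_back(W1, W2, trained_parts):
    """Copy independently trained sub-GCN blocks back into the global model."""
    for idx, sub_W1, sub_W2 in trained_parts:
        W1[:, idx] = sub_W1
        W2[idx, :] = sub_W2

# Usage: split once, train each (sub_W1, sub_W2) pair in parallel (e.g., one
# per GPU, with any existing graph-sampling technique), then merge.
d_in, d_hid, d_out, num_subnets = 128, 256, 40, 8
W1 = torch.randn(d_in, d_hid)
W2 = torch.randn(d_hid, d_out)
parts = partition_hidden(W1, W2, num_subnets)
# ... independent, parallel training of each partition would happen here ...
merge_back(W1, W2, parts)
```

Splitting columns of W1 together with the matching rows of W2 keeps each sub-GCN a valid, narrower GCN, which is what allows each partition to fit on a single GPU even when the full model does not.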

Original language: English
Pages (from-to): 1363-1415
Number of pages: 53
Journal: Journal of Applied and Computational Topology
Volume: 8
Issue number: 5
DOIs:
State: Published - Oct 2024
Externally published: Yes

Funding

This work is supported by NSF FET: Small No. 1907936, NSF MLWiNS CNS No. 2003137 (in collaboration with Intel), NSF CMMI No. 2037545, NSF CAREER award No. 2145629, NSF CIF No. 2008555, and a Rice InterDisciplinary Excellence Award (IDEA).

Funders | Funder number
NSF FET | 1907936
National Science Foundation | 2008555, 2145629, 2003137, 2037545

Keywords

• 68T07
• Distributed training
• Efficient training
• Graph neural networks
• Overparameterization
