Efficient Distributed Sequence Parallelism for Transformer-based Image Segmentation

Research output: Contribution to journal › Conference article › peer-review

3 Scopus citations

Abstract

We introduce an efficient distributed sequence parallel approach for training transformer-based deep learning image segmentation models. The neural network models combine a Vision Transformer encoder with a convolutional decoder to produce image segmentation mappings. The distributed sequence parallel approach is especially useful when the tokenized embedding representation of the image data is too large to fit into standard computing hardware memory. To demonstrate the performance and characteristics of our models trained in sequence parallel fashion compared to standard models, we evaluate our approach on a 3D MRI brain tumor segmentation dataset. We show that training with a sequence parallel approach matches standard sequential model training in terms of convergence. Furthermore, we show that our sequence parallel approach can support training of models that would not be possible on standard computing resources.
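The core idea described in the abstract, partitioning a long token sequence across workers so that no single device must hold the full embedding, can be sketched as follows. This is a minimal illustration with NumPy standing in for per-rank tensors; the function names, the even-split assumption, and the example dimensions (a 512-token sequence with embedding size 768 over 4 ranks) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of sequence (token) sharding for sequence parallelism.
# Assumes seq_len divides evenly by the number of ranks; names and
# dimensions here are illustrative, not taken from the paper.
import numpy as np

def shard_sequence(tokens: np.ndarray, world_size: int) -> list:
    """Split a (seq_len, embed_dim) token array into per-rank shards
    along the sequence dimension."""
    assert tokens.shape[0] % world_size == 0, "seq_len must divide evenly"
    return np.split(tokens, world_size, axis=0)

def gather_sequence(shards: list) -> np.ndarray:
    """Reassemble shards (e.g., before an operation that needs the full
    sequence, such as global attention) in rank order."""
    return np.concatenate(shards, axis=0)

# Example: 512 tokens of embedding size 768 split over 4 ranks, so
# each rank stores only 128 tokens (1/4 of the sequence memory).
tokens = np.random.rand(512, 768)
shards = shard_sequence(tokens, world_size=4)
assert all(s.shape == (128, 768) for s in shards)
assert np.array_equal(gather_sequence(shards), tokens)
```

In an actual distributed run each shard would live on a different device, with collective communication (e.g., all-gather) replacing the in-process `gather_sequence` call.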

Original language: English
Pages (from-to): 1991-1997
Number of pages: 7
Journal: IS and T International Symposium on Electronic Imaging Science and Technology
Volume: 36
Issue number: 12
DOIs
State: Published - 2024
Event: IS and T International Symposium on Electronic Imaging 2024: High Performance Computing for Imaging 2024 - San Francisco, United States
Duration: Jan 21 2024 - Jan 25 2024

Funding

This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). The author Yuankai Huo would also like to acknowledge the support from the National Institutes of Health under Grant R01DK135597.
