Abstract
As AI workloads grow in scope, small task-specific models struggle to generalize and demand ever larger amounts of labeled training data. In contrast, Foundation Models (FMs) are trained on internet-scale unlabeled data via self-supervised learning and have been shown to adapt to a wide range of tasks with minimal fine-tuning. Although large FMs have had significant impact in natural language processing and computer vision, efforts toward FMs for geospatial applications have been restricted to smaller models, because pretraining larger models requires very large computing resources equipped with state-of-the-art hardware accelerators. Current satellite constellations collect more than 100 TB of data per day, producing images that span billions of pixels and are multimodal in nature. Such geospatial data poses unique challenges and opens new opportunities to develop FMs. We investigate billion-scale FMs and HPC training profiles for geospatial applications by pretraining on publicly available data, and we study end-to-end how scaling the model size affects performance and downstream impact. Our larger 3B-parameter model achieves up to a 30% improvement in top-1 scene classification accuracy compared with a 100M-parameter model. Moreover, we detail performance experiments on the Frontier supercomputer, America's first exascale system, where we study different model- and data-parallel approaches using PyTorch's Fully Sharded Data Parallel (FSDP) library. Specifically, we study variants of the Vision Transformer (ViT) architecture, conducting performance analysis for ViT models with up to 15B parameters. By discussing throughput and performance bottlenecks under different parallelism configurations, we offer insights on how to leverage such leadership-class HPC resources when developing large models for geospatial imagery applications.
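The sketch below is not the authors' code; it is a minimal illustration of the kind of setup the abstract describes: wrapping a ViT backbone in PyTorch FSDP so that parameters, gradients, and optimizer state are sharded across ranks, with the sharding strategy being one of the parallelism knobs to profile. The choice of `timm` for the backbone, the model variant, precision, and hyperparameters are assumptions for illustration; a distributed process group is assumed to be launched externally (e.g., via `torchrun` or `srun`).

```python
# Hedged sketch: FSDP-wrapped Vision Transformer for distributed pretraining profiling.
# Model choice, precision, and sharding strategy are illustrative, not the paper's exact setup.
import functools

import torch
import torch.distributed as dist
import timm
from timm.models.vision_transformer import Block
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy


def build_fsdp_vit(local_rank: int) -> FSDP:
    # Hypothetical ViT configuration; the paper scales variants up to ~15B parameters.
    model = timm.create_model("vit_large_patch16_224", pretrained=False)

    # Shard at transformer-block granularity: each FSDP unit owns one block's
    # parameters, gradients, and optimizer state.
    wrap_policy = functools.partial(
        transformer_auto_wrap_policy, transformer_layer_cls={Block}
    )

    # bf16 mixed precision reduces memory footprint and communication volume.
    mp_policy = MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    )

    return FSDP(
        model,
        auto_wrap_policy=wrap_policy,
        mixed_precision=mp_policy,
        # FULL_SHARD (ZeRO-3-like) vs. HYBRID_SHARD (shard within a node,
        # replicate across nodes) is one trade-off to benchmark at scale.
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        device_id=torch.device("cuda", local_rank),
    )


if __name__ == "__main__":
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = build_fsdp_vit(local_rank)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```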
| Original language | English |
|---|---|
| Title of host publication | 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1036-1046 |
| Number of pages | 11 |
| ISBN (Electronic) | 9798350364606 |
| DOIs | |
| State | Published - 2024 |
| Event | 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024 - San Francisco, United States |
| Duration | May 27 2024 → May 31 2024 |
Publication series
| Name | 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024 |
|---|---|
Conference
| Conference | 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024 |
|---|---|
| Country/Territory | United States |
| City | San Francisco |
| Period | 05/27/24 → 05/31/24 |
Funding
A.T. would like to thank Less Wright from the PyTorch team for the valuable discussions. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
Keywords
- Distributed Training
- Foundation Models
- Geospatial
- Remote Sensing
- Vision Transformers