Is In-Context Learning Feasible for HPC Performance Autotuning?

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

We examine whether in-context learning with Large Language Models (LLMs) can effectively address the challenges of High-Performance Computing (HPC) autotuning. LLMs have demonstrated remarkable natural language processing and artificial intelligence (AI) capabilities, sparking interest in their application across various domains, including HPC. Performance autotuning - the process of automatically optimizing system configurations to maximize efficiency through empirical evaluation - offers significant promise for enhancing application performance on larger systems and emerging architectures. However, this process remains computationally expensive due to the combinatorial explosion of configuration parameters and the complex, nonlinear relationships between configurations and performance outcomes. We pose a critical question: Can LLMs, without task-specific fine-tuning, accurately infer performance-configuration patterns by combining in-context examples with latent knowledge? To explore this, we leverage empirical performance data from real-world HPC systems, designing structured prompts and queries to evaluate LLMs' capabilities. Our experiments reveal inherent limitations in applying in-context learning to performance autotuning, particularly for tasks requiring precise mathematical reasoning and analysis of complex multivariate dependencies. We provide empirical evidence of these shortcomings and discuss potential research directions to overcome these challenges.
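The in-context evaluation the abstract describes can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' actual code: the helper name, the tuning parameters (`block_size`, `unroll`), and the runtime values are invented for the example.

```python
# Hypothetical sketch of the prompting setup described in the abstract:
# empirical (configuration, runtime) pairs from an HPC system are serialized
# into a few-shot prompt, and an LLM would be asked to complete the runtime
# of an unseen configuration. All parameter names and values are illustrative.

def build_autotuning_prompt(examples, query_config):
    """Serialize performance-configuration pairs into a few-shot prompt."""
    lines = ["Each line maps an HPC kernel configuration to its measured runtime."]
    for config, runtime in examples:
        params = ", ".join(f"{k}={v}" for k, v in config.items())
        lines.append(f"Config: {params} -> runtime: {runtime:.2f} ms")
    # The query configuration is left incomplete for the model to fill in.
    query = ", ".join(f"{k}={v}" for k, v in query_config.items())
    lines.append(f"Config: {query} -> runtime:")
    return "\n".join(lines)

examples = [
    ({"block_size": 32, "unroll": 2}, 14.80),
    ({"block_size": 64, "unroll": 4}, 9.35),
    ({"block_size": 128, "unroll": 4}, 11.02),
]
prompt = build_autotuning_prompt(examples, {"block_size": 64, "unroll": 2})
print(prompt)
```

The resulting prompt would then be sent to an LLM without any fine-tuning; the paper's finding is that completions of this kind struggle with the precise numerical reasoning the task requires.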

Original language: English
Title of host publication: Proceedings - 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 978-985
Number of pages: 8
ISBN (Electronic): 9798331526436
DOIs
State: Published - 2025
Event: 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025 - Milan, Italy
Duration: Jun 3 2025 - Jun 7 2025

Publication series

Name: Proceedings - 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025

Conference

Conference: 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025
Country/Territory: Italy
City: Milan
Period: 06/3/25 - 06/7/25

Keywords

  • HPC
  • In Context Learning
  • LLM
  • Performance Autotuning

