Abstract
Fine-tuning existing LLMs for specialized tasks has become an attractive alternative to training models from scratch, owing to its low cost and quick development cycle. With many pre-trained LLMs available, choosing the right model as the starting point, or base model, is an increasingly complex task. In this work we discuss ChatPORT, a specialized fine-tuned LLM geared towards correctly translating code from one programming model to another. We evaluate a number of base models and compare and contrast the features and characteristics that make each a viable starting point. In this paper, we focus on the OpenMP offload porting capabilities of ChatPORT. We build our training data using kernels from the Heterogeneous Computing Benchmarks (HeCBench) [12] and the OpenMP Validation and Verification suite [5] to fine-tune the base models. We then test the model using unseen kernels extracted from the HeCBench benchmark suite. Our results show that: (1) not all open LLMs geared towards HPC are aware of programming models like OpenMP; (2) although all base models benefit from fine-tuning, they learn differently and produce different correctness rates; (3) depending on the memory and compute resources available, different base models can be used for fine-tuning without significantly affecting the quality of the transpiled code they generate; (4) fine-tuning improved the correctness rate of the LLMs by an average of 43.2%; and (5) feedback-based training data further increased the correctness rate by an average of 6% across the LLMs tested.
| Original language | English |
|---|---|
| Title of host publication | OpenMP |
| Subtitle of host publication | Balancing Productivity and Performance Portability - 21st International Workshop on OpenMP, IWOMP 2025, Proceedings |
| Editors | Yonghong Yan, Erik Saule, Michael Klemm, Bronis R. de Supinski, Jannis Klinkenberg, Swaroop Pophale |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 197-211 |
| Number of pages | 15 |
| ISBN (Print) | 9783032063427 |
| DOIs | |
| State | Published - 2026 |
| Event | 21st International Workshop on OpenMP, IWOMP 2025 - Charlotte, United States (Oct 1 2025 → Oct 3 2025) |
Publication series
| Name | Lecture Notes in Computer Science |
|---|---|
| Volume | 16123 LNCS |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Conference
| Conference | 21st International Workshop on OpenMP, IWOMP 2025 |
|---|---|
| Country/Territory | United States |
| City | Charlotte |
| Period | 10/1/25 → 10/3/25 |
Funding
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, through "Advancements in Artificial Intelligence for Science", DE-FOA-0003264, under award number DE-SC0025645 and contract number ERKJ442.
Keywords
- Code Porting
- LLM
- OpenMP offloading