Abstract
We present a comprehensive evaluation of proprietary and open-weights large language models using the first astronomy-specific benchmarking dataset. This dataset comprises 4,425 multiple-choice questions curated from the Annual Review of Astronomy and Astrophysics, covering a broad range of astrophysical topics. Our analysis examines model performance across various astronomical subfields and assesses response calibration, which is crucial for potential deployment in research environments. Claude-3.5-Sonnet outperforms competitors by up to 4.6 percentage points, achieving 85.0% accuracy. Among proprietary models, we observe a consistent trend: the cost of achieving a comparable score on this benchmark falls every 3 to 12 months. Open-weights models have rapidly improved, with LLaMA-3-70b (80.6%) and Qwen-2-72b (77.7%) now competing with some of the best proprietary models. We identify performance variations across topics, with non-English-focused models generally struggling more on questions related to exoplanets, stellar astrophysics, and instrumentation. These challenges likely stem from less abundant training data, limited historical context, and rapid recent developments in these areas. This pattern is observed across both open-weights and proprietary models, with evident regional dependencies, highlighting the impact of training-data diversity on model performance in specialized scientific domains. Top-performing models demonstrate well-calibrated confidence, with correlations above 0.9 between confidence and correctness, though they tend to be slightly underconfident. The development of fast, low-cost inference for open-weights models presents new opportunities for affordable deployment in astronomy. The rapid progress observed suggests that LLM-driven research in astronomy may become feasible in the near future.
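For readers who want a concrete picture of the two headline metrics, the sketch below shows one plausible way to compute benchmark accuracy and the binned confidence-correctness correlation reported above. This is a minimal illustration, not the authors' evaluation code: the `Response` record, its field names, and the ten-bin scheme are assumptions made for the example.

```python
# Minimal sketch (illustrative, not the paper's code) of scoring a
# multiple-choice astronomy benchmark: overall accuracy plus the Pearson
# correlation between binned stated confidence and empirical correctness.
from dataclasses import dataclass
import numpy as np

@dataclass
class Response:
    chosen: str        # model's selected option, e.g. "B" (hypothetical field)
    correct: str       # ground-truth option
    confidence: float  # model-reported probability of being correct, in [0, 1]

def accuracy(responses: list[Response]) -> float:
    """Fraction of questions answered correctly."""
    return float(np.mean([r.chosen == r.correct for r in responses]))

def calibration_correlation(responses: list[Response], n_bins: int = 10) -> float:
    """Correlation between mean stated confidence and empirical accuracy
    across confidence bins; values near 1 indicate good calibration."""
    conf = np.array([r.confidence for r in responses])
    hit = np.array([r.chosen == r.correct for r in responses], dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mean_conf, mean_acc = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Include the right edge in the final bin so confidence == 1.0 is counted.
        mask = (conf >= lo) & ((conf < hi) if hi < 1.0 else (conf <= hi))
        if mask.any():
            mean_conf.append(conf[mask].mean())
            mean_acc.append(hit[mask].mean())
    return float(np.corrcoef(mean_conf, mean_acc)[0, 1])
```

Under a scheme like this, the slight underconfidence noted in the abstract would show up as binned empirical accuracy sitting systematically above the mean stated confidence, even while the correlation remains high.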
| Original language | English |
| --- | --- |
| Article number | 100893 |
| Journal | Astronomy and Computing |
| Volume | 51 |
| DOIs | |
| State | Published - Apr 2025 |
Funding
This research was conducted using resources and services provided by the National Computational Infrastructure (NCI), Australia, which receives support from the Australian Government, and the Oak Ridge Leadership Computing Facility Frontier Nodes, United States, which is a DOE Office of Science User Facility at the Oak Ridge National Laboratory supported by the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. We are also grateful for support from Microsoft's Accelerating Foundation Models Research (AFMR) program, United States, which played a crucial role in enabling this benchmarking work. The work at Argonne National Laboratory was supported by the U.S. Department of Energy, Office of High Energy Physics and Advanced Scientific Computing Research, through the SciDAC-RAPIDS2 institute. Argonne National Laboratory is a U.S. Department of Energy Office of Science Laboratory operated by UChicago Argonne LLC, United States, under contract no. DE-AC02-06CH11357. The views expressed herein do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Keywords
- Astronomy
- Benchmarking
- Large Language Models
- Question Answering
- Scientific Knowledge Assessment