Estimating Code Biases for Criticality Safety Applications with Few Relevant Benchmarks

Christopher M. Perfetti, Bradley T. Rearden

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Criticality safety analyses rely on the availability of relevant benchmark experiments to determine justifiable margins of subcriticality. When a target application lacks neutronically similar benchmark experiments, validation studies must provide justification to the regulator that the impact of modeling and simulation limitations is well understood for the application and must often provide additional subcritical margin to ensure safe operating conditions. This study estimated the computational bias in the critical eigenvalue for several criticality safety applications supported by only a few relevant benchmark experiments. The accuracy of the following three methods for predicting computational biases was evaluated: the Upper Subcritical Limit STATisticS (USLSTATS) trending analysis method; the Whisper nonparametric method; and TSURFER, which is based on the generalized linear least-squares technique. These methods were also applied to estimate computational biases and recommended upper subcriticality limits for several critical experiments with known biases and for several cases from a blind benchmark study. The methods are evaluated based on both the accuracy of their predicted computational bias and upper subcriticality limit estimates and the consistency of those estimates as the model parameters, covariance data libraries, and set of available benchmark data were varied. Data assimilation methods typically have not been used for criticality safety licensing activities, and this study explores a methodology to address concerns regarding the reliability of such methods in criticality safety bias prediction applications.
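To illustrate the trending-analysis idea referenced above, the following is a minimal sketch (not the USLSTATS implementation itself): calculated k-effective values for a set of benchmark experiments are regressed against a trending parameter, the fit is evaluated at the application's parameter value to estimate the computational bias, and an administrative margin is subtracted to form a simplified upper subcritical limit. All benchmark data, the choice of trending parameter, and the margin value below are invented for illustration.

```python
import numpy as np

# Hypothetical benchmark data (invented for illustration): a trending
# parameter for each benchmark (e.g., a similarity index) and the
# corresponding calculated k-effective values.
ck = np.array([0.80, 0.85, 0.90, 0.95, 0.98])
keff = np.array([0.9992, 0.9995, 0.9989, 1.0003, 1.0001])

# Least-squares linear trend: keff ~ a * ck + b
a, b = np.polyfit(ck, keff, 1)

# Evaluate the trend at the application's parameter value; the computational
# bias is the deviation of the predicted k-effective from critical (1.0).
ck_app = 0.99
bias = (a * ck_app + b) - 1.0

# Simplified upper subcritical limit: unity plus bias minus an assumed
# administrative margin (statistical uncertainty terms are omitted here).
admin_margin = 0.05
usl = 1.0 + bias - admin_margin

print(f"estimated bias = {bias:+.5f}, USL = {usl:.5f}")
```

A full USLSTATS treatment additionally folds in benchmark and fit uncertainties through confidence-band statistics; this sketch shows only the trending-and-extrapolation step.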

Original language: English
Pages (from-to): 1090-1128
Number of pages: 39
Journal: Nuclear Science and Engineering
Volume: 193
Issue number: 10
DOIs
State: Published - Oct 3 2019

Funding

This paper has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy.

Keywords

  • computational bias estimation
  • criticality safety
  • Monte Carlo
  • sensitivity analysis
  • upper subcriticality limit

