Publications

Selected abstracts

(For a full list of publications see below)

Get on the BAND Wagon: A Bayesian Framework for Quantifying Model Uncertainties in Nuclear Dynamics

We describe the Bayesian Analysis of Nuclear Dynamics (BAND) framework, a cyberinfrastructure under development that will unify the treatment of nuclear models, experimental data, and associated uncertainties. We overview the statistical principles and nuclear-physics contexts underlying the BAND toolset, with an emphasis on Bayesian methodology’s ability to leverage insight from multiple models. To facilitate understanding of these tools, we provide a simple and accessible example of the BAND framework’s application. Four case studies are presented to highlight how elements of the framework will enable progress on complex, far-ranging problems in nuclear physics. By collecting notation and terminology, providing illustrative examples, and giving an overview of the associated techniques, this paper aims to open paths through which the nuclear physics and statistics communities can contribute to and build upon the BAND framework. [The R code used to generate the toy model figures is available as a zipfile.]

D.R. Phillips, R.J. Furnstahl, U. Heinz, T. Maiti, W. Nazarewicz, F.M. Nunes, M. Plumlee, M.T. Pratola, S. Pratt, F.G. Viens, and S.M. Wild

J. Phys. G 48, 072001 (2021)

Bayesian mixture model approach to quantifying the empirical nuclear saturation point

The equation of state (EOS) in the limit of infinite symmetric nuclear matter exhibits an equilibrium density at which the pressure vanishes and the energy per particle attains its minimum. Although not directly measurable, the nuclear saturation point (n₀, E₀) can be extrapolated by density-functional theory (DFT), providing tight constraints for microscopic interactions derived from chiral effective-field theory (EFT). We present a Bayesian mixture model that combines multiple DFT predictions for (n₀, E₀) using an efficient conjugate prior approach. The inferred posterior distribution for the saturation point’s mean and covariance matrix follows a normal-inverse-Wishart (NIW) class, resulting in posterior predictives in the form of correlated, bivariate t distributions. The DFT uncertainty reports are then used to mix these posteriors using an ordinary Monte Carlo approach. Our Bayesian framework is publicly available, so practitioners can readily use and extend our results.

C. Drischler, P.G. Giuliani, S. Bezoui, J. Piekarewicz, F. Viens

Phys. Rev. C 110, 044320 (2024)
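The conjugate normal-inverse-Wishart update at the heart of this approach can be sketched in a few lines; the "DFT predictions" and prior hyperparameters below are invented for illustration and are not the paper's data:

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal

rng = np.random.default_rng(0)

# Hypothetical DFT predictions for (n0 [fm^-3], E0 [MeV]); illustrative values only.
y = np.array([[0.155, -15.9], [0.160, -16.1], [0.158, -15.8], [0.152, -16.0]])
n, d = y.shape

# Weakly informative NIW prior hyperparameters (assumed for this sketch).
mu0, kappa0, nu0 = np.array([0.16, -16.0]), 1.0, d + 2
Psi0 = np.diag([1e-3, 0.5])

# Conjugate NIW posterior update for the saturation point's mean and covariance.
ybar = y.mean(axis=0)
S = (y - ybar).T @ (y - ybar)
kappa_n, nu_n = kappa0 + n, nu0 + n
mu_n = (kappa0 * mu0 + n * ybar) / kappa_n
Psi_n = Psi0 + S + (kappa0 * n / kappa_n) * np.outer(ybar - mu0, ybar - mu0)

# Monte Carlo draws from the posterior predictive (a correlated bivariate t).
draws = np.empty((5000, d))
for i in range(5000):
    Sigma = invwishart.rvs(df=nu_n, scale=Psi_n, random_state=rng)
    mu = multivariate_normal.rvs(mean=mu_n, cov=Sigma / kappa_n, random_state=rng)
    draws[i] = multivariate_normal.rvs(mean=mu, cov=Sigma, random_state=rng)

print("predictive centre:", draws.mean(axis=0))
```

The closed-form bivariate t could be used instead of the sampling loop; the loop is kept to make the hierarchy (covariance, then mean, then prediction) explicit.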

Assessing correlated truncation errors in modern nucleon-nucleon potentials

We test the BUQEYE model of correlated effective field theory (EFT) truncation errors on Reinert, Krebs, and Epelbaum’s semilocal momentum-space implementation of the chiral EFT (χEFT) expansion of the nucleon-nucleon (NN) potential. This Bayesian model hypothesizes that dimensionless coefficient functions extracted from the order-by-order corrections to NN observables can be treated as draws from a Gaussian process (GP). We combine a variety of graphical and statistical diagnostics to assess when predicted observables have a χEFT convergence pattern consistent with the hypothesized GP statistical model. All our results can be reproduced using a publicly available Jupyter notebook, which can be straightforwardly modified to analyze other χEFT NN potentials.

P. J. Millican, R. J. Furnstahl, J. A. Melendez, D. R. Phillips, and M. T. Pratola

Phys. Rev. C 110, 044002 (2024)
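The coefficient-extraction step underlying this kind of analysis can be illustrated with a toy convergence pattern; the expansion parameter, reference scale, and coefficient functions below are invented, and the pointwise band is a simplification of the full correlated GP model:

```python
import numpy as np

# Toy order-by-order predictions y_k(x) = y_ref * sum_{n<=k} c_n(x) Q^n with an
# assumed expansion parameter Q and smooth, invented coefficient functions c_n.
x = np.linspace(0, 1, 25)
Q, yref = 0.3, 1.0
cs = [np.ones_like(x), np.cos(3 * x), np.sin(2 * x), 0.8 * np.cos(5 * x)]
partials = np.cumsum([yref * c * Q**n for n, c in enumerate(cs)], axis=0)

# Recover the dimensionless coefficients from order-by-order differences.
c_rec = [(partials[n] - partials[n - 1]) / (yref * Q**n) for n in range(1, 4)]

# Pointwise variance estimate cbar^2 for the coefficients, and the resulting
# 1-sigma truncation band at order k = 3 (geometric sum over omitted orders).
cbar = np.sqrt(np.mean(np.concatenate(c_rec) ** 2))
band = yref * cbar * Q**4 / np.sqrt(1 - Q**2)
print(f"cbar = {cbar:.3f}, truncation band = {band:.4f}")
```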

Simulation experiment design for calibration via active learning

Simulation models take parameters as input and return outputs that describe the behavior of complex systems. Calibration is the process of estimating the values of the parameters in a simulation model in light of observed data from the system being simulated. When simulation models are expensive, emulators built from simulation data serve as computationally efficient approximations of the expensive model. An emulator can then be used to predict model outputs instead of repeatedly running an expensive simulation model during the calibration process. Sequential design with an intelligent selection criterion can guide the collection of the simulation data used to build an emulator, making the calibration process more efficient and effective. This article proposes two novel criteria for sequentially acquiring new simulation data in an active learning setting by considering uncertainties in the posterior density of the parameters. Analysis of several simulation experiments, as well as real-data simulation experiments from epidemiology, demonstrates that the proposed approaches result in improved posterior and field predictions.

Özge Sürer

Journal of Quality Technology (2024)
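The flavor of such an acquisition loop can be sketched with a one-parameter toy calibration problem; the simulator, noise level, and the particular posterior-times-uncertainty criterion below are illustrative assumptions, not the article's criteria:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulator(theta):
    """Cheap stand-in for an expensive simulation model (invented)."""
    return np.sin(3 * theta) + theta

y_obs, sigma = simulator(0.6), 0.1          # synthetic field observation at theta = 0.6

# Start from a few space-filling runs, then acquire points sequentially.
thetas = list(np.linspace(0, 1, 4))
outputs = [simulator(t) for t in thetas]
grid = np.linspace(0, 1, 201)

for step in range(6):
    gp = GaussianProcessRegressor(alpha=1e-8, normalize_y=True)
    gp.fit(np.array(thetas).reshape(-1, 1), outputs)
    m, s = gp.predict(grid.reshape(-1, 1), return_std=True)
    # Unnormalized posterior density with emulator variance folded into the likelihood.
    post = np.exp(-0.5 * (y_obs - m) ** 2 / (sigma**2 + s**2))
    # Acquisition: favor parameters with high posterior density AND emulator uncertainty.
    theta_new = grid[np.argmax(post * s)]
    thetas.append(theta_new)
    outputs.append(simulator(theta_new))

print(f"posterior mode after acquisition: theta = {grid[np.argmax(post)]:.2f}")
```

The emergent behavior is the one described above: the emulator uncertainty `s` drives exploration, while the posterior factor concentrates runs where calibration needs them.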

Model orthogonalization and Bayesian forecast mixing via principal component analysis

One can improve predictability in an unknown domain by combining forecasts of imperfect, complex computational models using a Bayesian statistical machine learning framework. In many cases, however, the models used in the mixing process are similar. In addition to contaminating the model space, the existence of such similar, or even redundant, models during the multimodeling process can result in misinterpretation of results and deterioration of predictive performance. In this paper we describe a method based on principal component analysis that eliminates model redundancy. We show that by adding model orthogonalization to the proposed Bayesian model combination framework, one can achieve better prediction accuracy and excellent uncertainty-quantification performance.

P. Giuliani, K. Godbey, V. Kejzlar, W. Nazarewicz

Phys. Rev. Research 6, 033266 (2024)
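A minimal sketch of PCA-based redundancy elimination on an invented set of model predictions; the models, noise level, and variance cutoff are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Predictions of five "models" on 40 inputs; models 3-5 are near-copies of model 1,
# mimicking redundancy in the model space (all functions are invented).
x = np.linspace(0, 1, 40)
base, alt = np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)
F = np.column_stack([base, alt] + [base + 0.01 * rng.standard_normal(40) for _ in range(3)])

# PCA on the centered prediction matrix: redundant models collapse onto shared components.
Fc = F - F.mean(axis=0)
U, sv, _ = np.linalg.svd(Fc, full_matrices=False)
explained = sv**2 / np.sum(sv**2)
k = int(np.searchsorted(np.cumsum(explained), 0.999) + 1)

# The leading principal components replace the redundant raw model set.
F_orth = U[:, :k] * sv[:k]
print(f"{F.shape[1]} raw models -> {k} orthogonal components")
```

The three near-duplicate columns contribute essentially no new variance, so the orthogonalized "model space" has only two effective members.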

Taweret: a Python package for Bayesian model mixing

Uncertainty quantification using Bayesian methods is a growing area of research. Bayesian model mixing (BMM) is a recent development which combines the predictions from multiple models such that the fidelity of each model is preserved in the final result. Practical tools and analysis suites that facilitate such methods are therefore needed. Taweret introduces BMM to existing Bayesian uncertainty quantification efforts. Currently, Taweret contains three individual Bayesian model mixing techniques, each pertaining to a different type of problem structure; we encourage the future inclusion of user-developed mixing methods. Taweret’s first use case is in nuclear physics, but the package has been structured such that it should be adaptable to any research area engaged in model comparison or model mixing.

K. Ingles, D. Liyanage, A. C. Semposki, J. C. Yannotty

J. Open Source Softw. 9(97), 6175 (2024)

ROSE: A reduced-order scattering emulator for optical models

A new generation of phenomenological optical potentials requires robust calibration and uncertainty quantification, motivating the use of Bayesian statistical methods. These Bayesian methods usually require calculating observables for thousands or even millions of parameter sets, making fast and accurate emulators highly desirable or even essential. Emulating scattering across different energies or with interactions such as optical potentials is challenging because of the nonaffine parameter dependence, meaning the parameters do not all factorize from individual operators. Here we introduce and demonstrate the reduced-order scattering emulator (ROSE) framework, a reduced basis emulator that can handle nonaffine problems. ROSE is fully extensible and works within the publicly available BAND framework software suite. As a demonstration problem, we use ROSE to calibrate a realistic nucleon-target scattering model through the calculation of elastic cross sections. This problem shows the practical value of the ROSE framework for Bayesian uncertainty quantification with controlled trade-offs between emulator speed and accuracy as compared to high-fidelity solvers.

D. Odell, P. Giuliani, K. Beyer, M. Catacora-Rios, M. Y.-H. Chan, E. Bonilla, R. J. Furnstahl, K. Godbey, and F. M. Nunes

Phys. Rev. C 109, 044612 (2024)

Building trees for probabilistic prediction via scoring rules

Decision trees built with data remain in widespread use for nonparametric prediction. Predicting probability distributions is preferred over point predictions when uncertainty plays a prominent role in analysis and decision-making. We study modifying a tree to produce nonparametric predictive distributions. We find the standard method for building trees may not result in good predictive distributions and propose changing the splitting criteria for trees to one based on proper scoring rules. Analysis of both simulated data and several real datasets demonstrates that using these new splitting criteria results in trees with improved predictive properties considering the entire predictive distribution.

Sara Shashaani, Özge Sürer, Matthew Plumlee, and Seth Guikema.

Technometrics (2024)
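The idea of replacing the usual variance-reduction split with a proper scoring rule can be sketched as follows, using the CRPS of empirical leaf distributions on heteroscedastic toy data; scoring leaves on their own training points is a simplification of the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(3)

def crps(sample, y):
    """CRPS of an empirical predictive distribution `sample` against outcome y."""
    s = np.asarray(sample, dtype=float)
    return np.mean(np.abs(s - y)) - 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))

# Heteroscedastic toy data: the spread, not the mean, changes at x = 0.5.
x = rng.uniform(0, 1, 300)
y = rng.normal(0.0, np.where(x < 0.5, 0.1, 1.0))

def split_score(c):
    """Total CRPS when each leaf predicts its own empirical distribution."""
    left, right = y[x < c], y[x >= c]
    return sum(crps(leaf, yi) for leaf in (left, right) for yi in leaf)

# A variance-based criterion sees little here (both halves have mean ~0),
# but the CRPS criterion rewards isolating the low-noise region.
cands = np.linspace(0.1, 0.9, 17)
best = cands[int(np.argmin([split_score(c) for c in cands]))]
print(f"CRPS-selected split at x = {best:.2f}")
```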

Effective field theory for the bound states and scattering of a heavy charged particle and a neutral atom

We show that the system of a heavy charged particle and a neutral atom can be described by a low-energy effective field theory in which the attractive 1/r^4 induced dipole potential determines the long-distance, low-energy wave functions. The 1/r^4 interaction is renormalized by a contact interaction at leading order. Derivative corrections to that contact interaction give rise to higher-order terms. We show that this “induced-dipole EFT” (ID-EFT) reproduces the π+-hydrogen phase shifts of a more microscopic potential, the Temkin-Lamkin potential, over a wide range of energies. Already at leading order it also describes the highest-lying excited bound states of the pionic-hydrogen ion. Lower-lying bound states receive substantial corrections at next-to-leading order, with the size of the correction proportional to their distance from the scattering threshold. Our next-to-leading-order calculations show that the three highest-lying bound states of the Temkin-Lamkin potential are well described in ID-EFT.

Daniel Odell, Daniel R. Phillips, and Ubirajara van Kolck

Phys. Rev. C 108, 062817 (2023)

Model Mixing Using Bayesian Additive Regression Trees

We introduce a flexible Bayesian model mixing (BMM) methodology using tree basis functions. In modern computer experiment applications, one often encounters the situation where various models of a physical system are considered, each implemented as a simulator on a computer. An important question in such a setting is determining the best simulator, or the best combination of simulators, to use for prediction and inference. Bayesian model averaging (BMA) and stacking are two statistical approaches used to account for model uncertainty by aggregating a set of predictions through a simple linear combination or weighted average, but this ignores the localized behavior of each simulator. This paper proposes a BMM model based on Bayesian Additive Regression Trees (BART) which has the desired flexibility to capture each simulator’s local behavior. The proposed methodology is applied to combine predictions from Effective Field Theories (EFTs) associated with a motivating nuclear physics application.

J.C. Yannotty, T.J. Santner, R.J. Furnstahl, M.T. Pratola.

Technometrics (2023)

Bayesian calibration of viscous anisotropic hydrodynamic simulations of heavy-ion collisions

Due to large pressure gradients at early times, standard hydrodynamic model simulations of relativistic heavy-ion collisions do not become reliable until O(1) fm/c after the collision. To address this, one often introduces a pre-hydrodynamic stage that models the early evolution microscopically, typically as a conformal, weakly interacting gas. In such an approach the transition from the pre-hydrodynamic to the hydrodynamic stage is discontinuous, introducing considerable theoretical model ambiguity. Alternatively, fluids with large anisotropic pressure gradients can be handled macroscopically using the recently developed Viscous Anisotropic Hydrodynamics (VAH). In high-energy heavy-ion collisions VAH is applicable already at very early times, and at later times it transitions smoothly into conventional second-order viscous hydrodynamics (VH). We present a Bayesian calibration of the VAH model with experimental data for Pb–Pb collisions at the LHC. We find that the VAH model has the unique capability of constraining the specific viscosities of the quark–gluon plasma (QGP) at higher temperatures than other previously used models.

D. Liyanage, Ö. Sürer, M. Plumlee, S.M. Wild, U. Heinz.

Phys. Rev. C 108, 054905 (2023)

Local Bayesian Dirichlet mixing of imperfect models

To improve the predictability of complex computational models in experimentally unknown domains, we propose a Bayesian statistical machine learning framework that uses the Dirichlet distribution to combine the results of several imperfect models. This framework can be viewed as an extension of Bayesian stacking. To illustrate the method, we study the ability of Bayesian model averaging and mixing techniques to mine nuclear masses. We show that the global and local mixtures of models reach excellent performance on both prediction accuracy and uncertainty quantification and are preferable to classical Bayesian model averaging. Additionally, our statistical analysis indicates that improving model predictions through mixing, rather than mixing of corrected models, leads to more robust extrapolations.

Vojta Kejzlar, Leo Neufcourt, and Witek Nazarewicz

Scientific Reports 13, 19600 (2023)
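A crude global version of such weight mixing can be sketched by importance-sampling Dirichlet-distributed weights against synthetic data; the models, noise scale, and sampling shortcut are illustrative, and the paper's local mixing additionally lets the weights vary across the domain:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two imperfect "models" of a synthetic truth on a 1D domain (all invented).
x = np.linspace(0, 1, 50)
truth = np.sin(2 * np.pi * x)
m1 = truth + 0.3 * (x - 0.2)        # more accurate at small x
m2 = truth - 0.4 * (x - 0.8)        # more accurate at large x

# Dirichlet prior over the mixture weights; the weight posterior is approximated
# by importance sampling against the data likelihood (sigma is an assumed noise scale).
sigma = 0.05
draws = rng.dirichlet([1.0, 1.0], size=20000)
mix = draws @ np.vstack([m1, m2])
logp = -0.5 * np.sum((mix - truth) ** 2, axis=1) / sigma**2
w = np.exp(logp - logp.max())
w /= w.sum()
post_mean_weight = float((w * draws[:, 0]).sum())
print(f"posterior mean weight on model 1: {post_mean_weight:.2f}")
```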

Sequential Bayesian experimental design for calibration of expensive simulation models

Simulation models of critical systems often have parameters that need to be calibrated using observed data. For expensive simulation models, calibration is done using an emulator of the simulation model built on simulation output at different parameter settings. Intelligent and adaptive selection of the parameters used to build the emulator can drastically improve the efficiency of the calibration process. This article proposes a sequential framework with a novel criterion for parameter selection that targets learning the posterior density of the parameters. The emergent behavior from this criterion is that exploration happens by selecting parameters in uncertain posterior regions, while exploitation happens simultaneously by selecting parameters in regions of high posterior density. The advantages of the proposed method are illustrated using several simulation experiments and a nuclear physics reaction model.

Özge Sürer, Matthew Plumlee, and Stefan M. Wild.

Technometrics (2023)

Deconvoluting experimental decay energy spectra: The 26O case

In nuclear reaction experiments, the measured decay energy spectra can give insights into the shell structure of decaying systems. However, extracting the underlying physics from the measurements is challenging due to detector resolution and acceptance effects. The Richardson-Lucy (RL) algorithm, a deblurring method that is commonly used in optics, was applied to our experimental nuclear physics data. The only inputs to the method are the observed energy spectrum and the detector’s response matrix. We demonstrate that the technique can help access information about the shell structure of particle-unbound systems from the measured decay energy spectrum that is not immediately accessible via traditional approaches such as χ² fitting. For a similar purpose, we developed a deep neural network (DNN) classifier to identify resonance states from the measured decay energy spectrum. We tested the performance of both methods on simulated data and experimental measurements. Then, we applied both algorithms to the decay energy spectrum of 26O→24O+n+n measured via invariant mass spectroscopy. The resonance states restored using the RL algorithm to deblur the measured decay energy spectrum agree with those found by the DNN classifier. Both approaches suggest that the raw decay energy spectrum of 26O exhibits three peaks at approximately 0.15 MeV, 1.50 MeV, and 5.00 MeV, with half-widths of 0.29 MeV, 0.80 MeV, and 1.85 MeV, respectively.

Pierre Nzabahimana, Thomas Redpath, Thomas Baumann, Pawel Danielewicz, Pablo Giuliani, and Paul Guèye

Phys. Rev. C 107, 064315 (2023)
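The Richardson-Lucy iteration itself is compact; here is a sketch on a synthetic two-peak spectrum with an assumed Gaussian response matrix (peak positions, widths, and the response are invented, not the 26O measurement):

```python
import numpy as np

# Synthetic decay-energy spectrum: two resonance peaks (illustrative numbers only).
E = np.linspace(0, 5, 200)
truth = np.exp(-0.5 * ((E - 0.8) / 0.1) ** 2) + 0.4 * np.exp(-0.5 * ((E - 2.5) / 0.2) ** 2)

# Assumed detector response: Gaussian blurring matrix R with rows summing to 1.
R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 0.3) ** 2)
R /= R.sum(axis=1, keepdims=True)
observed = R @ truth

# Richardson-Lucy deblurring: multiplicative updates that preserve positivity.
# As in the paper, the only inputs are the observed spectrum and the response matrix.
u = np.ones_like(E)
for _ in range(500):
    u *= (R.T @ (observed / (R @ u))) / R.sum(axis=0)

print(f"tallest restored peak at E = {E[np.argmax(u)]:.2f} (true peak at 0.80)")
```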

Constructing a simulation surrogate with partially observed output

Gaussian process surrogates are a popular alternative to directly using computationally expensive simulation models. When the simulation output consists of many responses, dimension-reduction techniques are often employed to construct these surrogates. However, surrogate methods with dimension reduction generally rely on complete output training data. This article proposes a new Gaussian process surrogate method that permits the use of partially observed output while remaining computationally efficient. The new method involves the imputation of missing values and the adjustment of the covariance matrix used for Gaussian process inference. The resulting surrogate represents the available responses, disregards the missing responses, and provides meaningful uncertainty quantification. The proposed approach is shown to offer sharper inference than alternatives in a simulation study and a case study where an energy density functional model that frequently returns incomplete output is calibrated.

Moses Y-H. Chan, Matthew Plumlee, and Stefan M. Wild.

Technometrics (2023)

BUQEYE guide to projection-based emulators in nuclear physics

The BUQEYE collaboration (Bayesian Uncertainty Quantification: Errors in Your effective field theory) presents a pedagogical introduction to projection-based, reduced-order emulators for applications in low-energy nuclear physics. The term emulator refers here to a fast surrogate model capable of reliably approximating high-fidelity models. As the general tools employed by these emulators are not yet well-known in the nuclear physics community, we discuss variational and Galerkin projection methods, emphasize the benefits of offline-online decompositions, and explore how these concepts lead to emulators for bound and scattering systems that enable fast and accurate calculations using many different model parameter sets. We also point to future extensions and applications of these emulators for nuclear physics, guided by the mature field of model (order) reduction. All examples discussed here and more are available as interactive, open-source Python code so that practitioners can readily adapt projection-based emulators for their own work.

Christian Drischler, Jordan Melendez, Dick Furnstahl, Alberto Garcia, and Xilin Zhang.

Front. Phys. 10, 1092931 (2023)
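The offline-online split behind such projection-based emulators can be sketched for a parameterized eigenvalue problem; a diagonal Hamiltonian plus a random symmetric perturbation stands in for a real nuclear model, and all sizes, scales, and training couplings are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)

# Two-term parameterized Hamiltonian H(theta) = H0 + theta * H1 (invented).
n = 100
H0 = np.diag(np.arange(n, dtype=float))
B = rng.standard_normal((n, n))
H1 = (B + B.T) / np.sqrt(n)

def ground_state(theta):
    vals, vecs = np.linalg.eigh(H0 + theta * H1)
    return vals[0], vecs[:, 0]

# Offline stage: ground-state snapshots at a few training couplings.
X = np.column_stack([ground_state(t)[1] for t in (0.0, 0.5, 1.0)])

# Online stage: Galerkin (Rayleigh-Ritz) projection turns each new theta into a
# 3x3 generalized eigenvalue problem.
def ground_state_emulated(theta):
    H = H0 + theta * H1
    return eigh(X.T @ H @ X, X.T @ X, eigvals_only=True)[0]

theta_test = 0.75
exact = ground_state(theta_test)[0]
emu = ground_state_emulated(theta_test)
print(f"exact E0 = {exact:.6f}, emulated E0 = {emu:.6f}")
```

Because the online stage is variational, the emulated ground-state energy bounds the exact one from above, which is a useful sanity check in practice.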

Bayes goes fast: Uncertainty quantification for a covariant energy density functional emulated by the reduced basis method

A covariant energy density functional is calibrated using a principled Bayesian statistical framework informed by experimental binding energies and charge radii of several magic and semi-magic nuclei. The Bayesian sampling required for the calibration is enabled by the emulation of the high-fidelity model through the implementation of a reduced basis method (RBM)—a set of dimensionality reduction techniques that can speed up demanding calculations involving partial differential equations by several orders of magnitude. The RBM emulator is able to accurately reproduce the model calculations in tens of milliseconds on a personal computer, an increase in speed of nearly a factor of 3,300. Besides the analysis of the posterior distribution of parameters, we present model calculations for masses and radii with properly estimated uncertainties. We also analyze the model correlation between the slope of the symmetry energy L and the neutron skin of 48Ca and 208Pb.

Pablo Giuliani, Kyle Godbey, Edgard Bonilla, Frederi Viens, and Jorge Piekarewicz.

Front. Phys. 10, 1054524 (2023)

ParMOO: A Python library for parallel multiobjective simulation optimization

ParMOO is a Python framework and library of solver components for building and deploying highly customized multiobjective simulation optimization solvers. ParMOO is designed to help engineers, practitioners, and optimization experts exploit available structures in how simulation outputs are used to formulate the objectives for a multiobjective optimization problem.

T.H. Chang and S.M. Wild

J. Open Source Softw. 8(82), 4468 (2023)

Variational inference with vine copulas: an efficient approach for Bayesian computer model calibration

With the advancement of computer architectures, computational models proliferate as a means to solve complex problems in many scientific applications, such as nuclear physics and climate research. However, the potential of such models is often hindered because they tend to be computationally expensive. We develop a computationally efficient algorithm based on variational Bayes inference (VBI) for the calibration of computer models with Gaussian processes. Unfortunately, under the calibration framework the standard fast-to-compute gradient estimates based on subsampling are biased due to the conditionally dependent data, which diminishes the efficiency of VBI. In this work, we adopt a pairwise decomposition of the data likelihood using vine copulas, which separates the information on the dependence structure in the data from the marginal distributions and leads to computationally efficient, unbiased gradient estimates and thus to scalable calibration. We provide empirical evidence for the computational scalability of our methodology, together with an average-case analysis and all the details necessary for an efficient implementation of the proposed algorithm. We also demonstrate the opportunities our method offers to practitioners on a real-data example through calibration of the Liquid Drop Model of nuclear binding energies.

V. Kejzlar and T. Maiti

Statistics and Computing 33, 18 (2022)

Measurement of 19F(p, γ)20Ne reaction suggests CNO breakout in first stars

Proposed mechanisms for the production of calcium in the first stars that formed out of the matter of the Big Bang are at odds with observations. Advanced nuclear burning and supernovae were thought to be the dominant source of the calcium production seen in all stars. Here we suggest a qualitatively different path to calcium production: breakout from the ‘warm’ carbon–nitrogen–oxygen (CNO) cycle. We report a direct experimental measurement of the 19F(p, γ)20Ne breakout reaction down to a very low energy of 186 kiloelectronvolts, and characterize a key resonance at 225 kiloelectronvolts. For temperatures of astrophysical interest, around 0.1 gigakelvin, this thermonuclear 19F(p, γ)20Ne rate is roughly a factor of seven larger than the previously recommended one. Our stellar models show a stronger breakout during stellar hydrogen burning than previously thought and may reveal the nature of calcium production in population III stars imprinted on the oldest known ultra-iron-poor star, SMSS0313-6708. This experimental result was obtained in the China JinPing Underground Laboratory, which offers an environment with an extremely low cosmic-ray-induced background. Our rate showcases the effect that faint population III star supernovae can have on the nucleosynthesis observed in the oldest known stars and first galaxies, which are key mission targets of the James Webb Space Telescope.

L. Zhang,…, D. Odell et al.

Nature 610, 656–660 (2022)

Training and projecting: A reduced basis method emulator for many-body physics

We present the reduced basis method as a tool for developing emulators for equations with tunable parameters within the context of the nuclear many-body problem. The method uses a basis expansion informed by a set of solutions for a few values of the model parameters and then projects the equations over a well-chosen low-dimensional subspace. We connect some of the results in the eigenvector continuation literature to the formalism of reduced basis methods and show how these methods can be applied to a broad set of problems. As we illustrate, the possible success of the formalism on such problems can be diagnosed beforehand by a principal component analysis. We apply the reduced basis method to the one-dimensional Gross-Pitaevskii equation with a harmonic trapping potential and to nuclear density functional theory for 48Ca, achieving speed-ups of more than ×150 and ×250, respectively, when compared to traditional solvers. The outstanding performance of the approach, together with its straightforward implementation, shows promise for its application to the emulation of computationally demanding calculations, including uncertainty quantification.

Edgard Bonilla, Pablo Giuliani, Kyle Godbey, and Dean Lee.

Phys. Rev. C 106, 054322 (2022)

Towards precise and accurate calculations of neutrinoless double-beta decay

We present the results of a National Science Foundation Project Scoping Workshop, the purpose of which was to assess the current status of calculations for the nuclear matrix elements governing neutrinoless double-beta decay and determine if more work on them is required. After reviewing important recent progress in the application of effective field theory, lattice quantum chromodynamics, and ab initio nuclear-structure theory to double-beta decay, we discuss the state of the art in nuclear-physics uncertainty quantification and then construct a roadmap for work in all these areas to fully complement the increasingly sensitive experiments in operation and under development. The roadmap includes specific projects in theoretical and computational physics as well as the use of Bayesian methods to quantify both intra- and inter-model uncertainties. The goal of this ambitious program is a set of accurate and precise matrix elements, in all nuclei of interest to experimentalists, delivered together with carefully assessed uncertainties. Such calculations will allow crisp conclusions from the observation or non-observation of neutrinoless double-beta decay, no matter what new physics is at play.

V. Cirigliano, Z. Davoudi, J. Engel, R. J. Furnstahl, G. Hagen, U. Heinz, and 17 other authors, including W. Nazarewicz, D. R. Phillips, M. Plumlee, F. Viens, and S. M. Wild.

J. Phys. G 49, 120502 (2022)

Interpolating between small- and large-g expansions using Bayesian model mixing

Bayesian model mixing (BMM) is a statistical technique that can be used to combine models that are predictive in different input domains into a composite distribution that has improved predictive power over the entire input space. We explore the application of BMM to the mixing of two expansions of a function of a coupling constant g that are valid at small and large values of g, respectively. This type of problem is quite common in nuclear physics, where physical properties are straightforwardly calculable in strong and weak interaction limits or at low and high densities or momentum transfers, but difficult to calculate in between. Interpolation between these limits is often accomplished by a suitable interpolating function, e.g., Padé approximants, but it is then unclear how to quantify the uncertainty of the interpolant. We address this problem in the simple context of the partition function of zero-dimensional φ⁴ theory, for which the (asymptotic) expansion at small g and the (convergent) expansion at large g are both known. We consider three mixing methods: linear mixture BMM, localized bivariate BMM, and localized multivariate BMM with Gaussian processes. We find that employing a Gaussian process in the intermediate region between the two predictive models leads to the best results of the three methods. The methods and validation strategies we present here should be generalizable to other nuclear physics settings.

A. C. Semposki, R. J. Furnstahl, D. R. Phillips

Phys. Rev. C 106, 044002 (2022)
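The linear-mixture variant can be sketched on a toy function whose small-g and large-g truncated expansions are known exactly; the function and the first-omitted-term variance estimates below are stand-ins for the φ⁴ partition function analysis:

```python
import numpy as np

# Toy function with known small-g and large-g truncated expansions.
def f_true(g):  return 1.0 / np.sqrt(1.0 + g)
def f_small(g): return 1.0 - g / 2 + 3 * g**2 / 8            # Taylor about g = 0
def f_large(g): return 1 / np.sqrt(g) - 1 / (2 * g**1.5)     # expansion in 1/g

g = np.linspace(0.05, 20, 400)
# Crude truncation-error variances: square of the first omitted term in each series.
var_s = (5 * g**3 / 16) ** 2
var_l = (3 / (8 * g**2.5)) ** 2

# Precision-weighted linear mixture: weights proportional to inverse variances.
w_s = (1 / var_s) / (1 / var_s + 1 / var_l)
mix = w_s * f_small(g) + (1 - w_s) * f_large(g)

err_mix = np.max(np.abs(mix - f_true(g)))
print(f"max |mixed - true| on [0.05, 20]: {err_mix:.3f}")
```

Each expansion fails badly outside its domain, yet the mixture stays controlled everywhere because its weight collapses wherever its estimated truncation error blows up.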

Uncertainty quantification in breakup reactions

Breakup reactions are one of the favored probes to study loosely bound nuclei near the limits of stability. In order to interpret such breakup experiments, the continuum discretized coupled channel method is typically used. In this study, the first Bayesian analysis of a breakup reaction model is performed. We use a combination of statistical methods together with a three-body reaction model to quantify the uncertainties on the breakup observables due to the parameters in the effective potential describing the loosely bound projectile of interest. The combination of tools we develop opens the path for a Bayesian analysis of a wide array of complex nuclear processes that require computationally intensive reaction models.

O. Surer, F. M. Nunes, M. Plumlee, S. M. Wild

Phys. Rev. C 106, 024607 (2022)

Model reduction methods for nuclear emulators

The field of model order reduction (MOR) is growing in importance due to its ability to extract the key insights from complex simulations while discarding computationally burdensome and superfluous information. We provide an overview of MOR methods for the creation of fast and accurate emulators of memory- and compute-intensive nuclear systems, focusing on eigen-emulators and variational emulators. As an example, we describe how ‘eigenvector continuation’ is a special case of a much more general and well-studied MOR formalism for parameterized systems. We continue with an introduction to the Ritz and Galerkin projection methods that underpin many such emulators, while pointing to the relevant MOR theory and its successful applications along the way. We believe that this guide will open the door to broader applications in nuclear physics and facilitate communication with practitioners in other fields.

J. A. Melendez, C. Drischler, R. J. Furnstahl, A. J. Garcia, Xilin Zhang

J. Phys. G. 49, 102001 (2022)

Analyzing rotational bands in odd-mass nuclei using effective field theory and Bayesian methods

We use a recently developed Effective Field Theory (EFT) for rotational bands in odd-mass nuclei to perform a Bayesian analysis of energy-level data in several nuclei. The error model in our Bayesian analysis includes both experimental and EFT truncation uncertainties. It also accounts for the fact that low-energy constants (LECs) at even and odd orders have different sizes. We use Markov Chain Monte Carlo sampling to explore the joint posterior of the EFT and error-model parameters and show that both can be reliably determined. We extract the LECs up to fourth order in the EFT and find that, provided we correctly account for EFT truncation errors, results for lower-order LECs are stable as we go to higher orders. LEC results are also stable with respect to the addition of higher-energy data. We find a clear correlation between the extracted and the expected value of the inverse breakdown scale. The EFT turns out to converge markedly better than would be naively expected based on the scales of the problem.

I.K. Alnamlah, E.A. Coello Pérez, D.R. Phillips

Front. Phys. 10, 901954 (2022)

Performing Bayesian Analyses With AZURE2 Using BRICK: An Application to the 7Be System

Phenomenological R-matrix theory has been a standard framework for the evaluation of resolved-resonance cross-section data in nuclear physics for many years. It is a powerful method for comparing different types of experimental nuclear data and for combining the results of many different experimental measurements to gain a better estimate of the true underlying cross sections. Yet a practical challenge has always been the estimation of the uncertainty both on the cross sections at the energies of interest and on the fit parameters, which can take the form of standard level parameters. In this work, the emcee Markov Chain Monte Carlo sampler has been implemented for the R-matrix code AZURE2, creating the Bayesian R-matrix Inference Code Kit (BRICK). Bayesian uncertainty estimation has then been carried out for a simultaneous R-matrix fit of a capture and a scattering reaction in the 7Be system.

D. Odell, C. R. Brune, D. R. Phillips, R. J. deBoer, S. N. Paneru

Front. Phys. 10, 888746 (2022)
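As a schematic illustration of the kind of Bayesian sampling that BRICK automates (this is not BRICK's actual interface, which couples the emcee sampler to AZURE2), the sketch below fits a toy single-level Breit-Wigner shape to synthetic data with a pure-NumPy random-walk Metropolis sampler. The model, data, priors, and all names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "cross section": one Breit-Wigner resonance (a stand-in for a real R-matrix model).
def sigma(E, E_r, Gamma):
    return (Gamma / 2) ** 2 / ((E - E_r) ** 2 + (Gamma / 2) ** 2)

# Synthetic data with Gaussian noise of known width.
noise = 0.02
E_grid = np.linspace(0.5, 2.5, 40)
data = sigma(E_grid, 1.5, 0.3) + rng.normal(0.0, noise, E_grid.size)

def log_posterior(theta):
    E_r, Gamma = theta
    if not (0.0 < E_r < 3.0 and 0.0 < Gamma < 1.0):  # flat priors on a box
        return -np.inf
    resid = data - sigma(E_grid, E_r, Gamma)
    return -0.5 * np.sum((resid / noise) ** 2)

def metropolis(logp, x0, steps=20000, scale=0.005):
    """Random-walk Metropolis sampler (BRICK itself uses the emcee ensemble sampler)."""
    x = np.asarray(x0, dtype=float)
    lp = logp(x)
    chain = np.empty((steps, x.size))
    for i in range(steps):
        prop = x + rng.normal(0.0, scale, x.size)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = metropolis(log_posterior, (1.4, 0.25))
E_r_est, Gamma_est = chain[5000:].mean(axis=0)  # posterior means after burn-in
```

The marginal widths of the chain, not just its mean, are the point of the exercise: they are the uncertainty estimates that a maximum-likelihood R-matrix fit alone does not supply.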

Colloquium: Machine Learning in Nuclear Physics

Advances in machine learning methods provide tools that have broad applicability in scientific research. These techniques are being applied across the diversity of nuclear physics research topics, leading to advances that will facilitate scientific discoveries and societal applications. This Colloquium provides a snapshot of nuclear physics research, which has been transformed by machine learning techniques.

A. Boehnlein, M. Diefenthaler, C. Fanelli, M. Hjorth-Jensen, T. Horn, M. P. Kuchera, D. Lee, W. Nazarewicz, K. Orginos, P. Ostroumov, L.-G. Pang, A. Poon, N. Sato, M. Schram, A. Scheinker, M. S. Smith, X.-N. Wang, V. Ziegler

Rev. Mod. Phys. 94, 031003 (2022)

Fast emulation of quantum three-body scattering

We develop a class of emulators for solving quantum three-body scattering problems based on combining the variational method for scattering observables with eigenvector continuation. The emulators are first trained on exact scattering solutions for a small number of parameter sets, and then employed to interpolate and extrapolate in the parameter space. Using a schematic nuclear-physics model with finite-range two- and three-body interactions, we demonstrate the emulators to be extremely accurate and efficient. The general strategies used here may be applicable for building the same type of emulators in other fields, wherever variational methods can be developed for evaluating physical models.

Xilin Zhang and R. J. Furnstahl

Phys. Rev. C 105, 064004 (2022)

Statistical correlations of nuclear quadrupole deformations and charge radii

The statistical correlations between nuclear deformations and charge radii of different nuclei are affected by the underlying shell structure. Even for well deformed and superfluid nuclei for which these observables change smoothly, the correlation range is rather short. This result suggests that the frequently made assumption of reduced statistical errors for the differences between smoothly-varying observables cannot be generally justified.

Paul-Gerhard Reinhard and Witold Nazarewicz

Phys. Rev. C 106, 014303 (2022)

Prehydrodynamic evolution and its impact on quark-gluon plasma signatures

State-of-the-art hydrodynamic models of heavy-ion collisions have considerable theoretical model uncertainties in the description of the very early pre-hydrodynamic stage. We add a new computational module, KTIso, that describes the pre-hydrodynamic evolution kinetically, based on the relativistic Boltzmann equation with collisions treated in the Isotropization Time Approximation. As a novelty, KTIso allows for the inclusion and evolution of initial-state momentum anisotropies. To maintain computational efficiency KTIso assumes strict longitudinal boost invariance and allows collisions to isotropize only the transverse momenta. We use it to explore the sensitivity of hadronic observables measured in relativistic heavy-ion collisions to initial-state momentum anisotropies and microscopic scattering during the pre-hydrodynamic stage.

D. Liyanage, D. Everett, C. Chattopadhyay, U. Heinz

Phys. Rev. C 105, 064908 (2022)

The Interplay of Femtoscopic and Charge-Balance Correlations

Correlations driven by the constraints of local charge conservation provide insight into the chemical evolution and diffusivity of the high-temperature matter created in ultra-relativistic heavy ion collisions. Two-particle correlations driven by final-state interactions have allowed the extraction of critical femtoscopic space-time information about the expansion and dissolution of the same collisions. As first steps toward a Bayesian analysis of charge-balance functions, this study quantifies the contribution from final-state interactions, which needs to be subtracted in order to quantitatively infer the diffusivity and chemical evolution of the QGP. As seen in the figure, the correction from final-state interactions is small.

Scott Pratt and Karina Martirosova

Phys. Rev. C 105, 054906 (2022)

Nudged elastic band approach to nuclear fission pathways

The nuclear fission process is a dramatic example of the large-amplitude collective motion in which the nucleus undergoes a series of shape changes before splitting into distinct fragments. This motion can be represented by a pathway in the many-dimensional space of collective coordinates. Within a stationary framework rooted in a static collective Schrödinger equation, the collective action along the fission pathway determines the spontaneous fission half-lives as well as mass and charge distributions of fission fragments. We study the performance and precision of various methods to determine the minimum-action and minimum-energy fission trajectories in two- and three-dimensional collective spaces. These methods include the nudged elastic band method (NEB), grid-based methods, and the Euler-Lagrange approach to the collective action minimization. The NEB method is capable of efficient determination of the exit points on the outer turning surface that characterize the most probable fission pathway and constitute the key input for fission studies. The NEB method will be particularly useful in large-scale static fission calculations of superheavy nuclei and neutron-rich fissioning nuclei contributing to the astrophysical r-process recycling.

Eric Flynn, Daniel Lay, Sylvester Agbemava, Pablo Giuliani, Kyle Godbey, Witold Nazarewicz, Jhilam Sadhukhan

Phys. Rev. C 105, 054302 (2022)

Effective field theory analysis of 3He-alpha scattering data

We treat low-energy 3He-alpha elastic scattering in an Effective Field Theory (EFT) that exploits the separation of scales in this reaction. We compute the amplitude up to Next-to-Next-to-Leading Order (NNLO), developing a hierarchy of the effective-range parameters that contribute at various orders. We use the resulting formalism to analyze data from recent measurements at center-of-mass energies of 0.38-3.12 MeV using the SONIK gas target at TRIUMF, as well as older data in this energy regime. We employ a likelihood function that incorporates the theoretical uncertainty due to truncation of the EFT and use Markov Chain Monte Carlo sampling to obtain the resulting posterior probability distribution. We find that the inclusion of a small amount of data on the analyzing power Ay is crucial to determine the sign of the p-wave splitting in such an analysis. The combination of Ay and SONIK data constrains all effective-range parameters up to O(p^4) in both s- and p-waves quite well. The asymptotic normalization coefficients (ANCs) and the s-wave scattering length are consistent with a recent EFT analysis of the capture reaction 3He(alpha,gamma)7Be.

M. Poudel, D. R. Phillips

J. Phys. G 49, 045102 (2022)

Black Box Variational Bayesian Model Averaging

For many decades now, Bayesian Model Averaging (BMA) has been a popular framework to systematically account for model uncertainty that arises in situations when multiple competing models are available to describe the same or similar physical process. The implementation of this framework, however, comes with a multitude of practical challenges including posterior approximation via Markov Chain Monte Carlo and numerical integration. We present a Variational Bayesian Inference approach to BMA as a viable alternative to the standard solutions which avoids many of the aforementioned pitfalls. The proposed method is “black box” in the sense that it can be readily applied to many models with little to no model-specific derivation. We illustrate the utility of our variational approach on a suite of examples and discuss all the necessary implementation details. Fully documented Python code with all the examples is provided as well.

V. Kejzlar, S. Bhattacharya, M. Son, T. Maiti

The American Statistician (2022)

Efficient emulation of relativistic heavy ion collisions with transfer learning

Measurements from the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC) can be used to study the properties of quark-gluon plasma. Systematic constraints on these properties must combine measurements from different collision systems and methodically account for experimental and theoretical uncertainties. Such studies require a vast number of costly numerical simulations. While computationally inexpensive surrogate models (“emulators”) can be used to efficiently approximate the predictions of heavy ion simulations across a broad range of model parameters, training a reliable emulator remains a computationally expensive task. We use transfer learning to map the parameter dependencies of one model emulator onto another, leveraging similarities between different simulations of heavy ion collisions. By limiting the need for large numbers of simulations to only one of the emulators, this technique reduces the numerical cost of comprehensive uncertainty quantification when studying multiple collision systems and exploring different models.

D. Liyanage, Y. Ji, D. Everett, M. Heffernan, U. Heinz, S. Mak, J-F. Paquet

Physical Review C 105, 034910 (2022)

Statistical tools for a better optical model

Modern statistical tools provide the ability to compare the information content of observables and offer a path to explore which experiments would be most useful for gaining insight into, and constraining, theoretical models. In this work we study three such tools: (i) principal component analysis, (ii) sensitivity analysis based on derivatives, and (iii) the Bayesian evidence. This is done in the context of nuclear reactions, with the goal of constraining the optical potential. We first apply these tools to a toy-model case. Then we consider two different reaction observables, elastic angular distributions and polarization data, for reactions on 48Ca and 208Pb at two different beam energies. For the toy-model case, we find significant discrimination power in the sensitivities and the Bayesian evidence, showing clearly that the volume imaginary term is more useful for describing scattering at higher energies. When comparing elastic cross sections and polarization data using realistic optical models, sensitivity studies indicate that both observables are roughly equally sensitive, but the variability of the optical-model parameters is strongly angle dependent. The Bayesian evidence shows some variability between the two observables, but the Bayes factor obtained is not sufficient to discriminate between angular distributions and polarization.

M. Catacora-Rios, G. B. King, A. E. Lovell, and F. M. Nunes

Physical Review C 104, 064611 (2021)

Rigorous constraints on three-nucleon forces in chiral effective field theory from fast and accurate calculations of few-body observables

We explore the constraints on the three-nucleon force (3NF) of chiral effective field theory (ChiEFT) that are provided by bound-state observables in the A=3 and A=4 sectors. Our statistically rigorous analysis incorporates experimental error, computational method uncertainty, and the uncertainty due to truncation of the ChiEFT expansion at next-to-next-to-leading order. A consistent solution for the 3H binding energy, the 4He binding energy and radius, and the 3H beta-decay rate can only be obtained if ChiEFT truncation errors are included in the analysis. The beta-decay rate is the only one of these that yields a nondegenerate constraint on the 3NF low-energy constants, which makes it crucial for the parameter estimation. We use eigenvector continuation for fast and accurate emulation of no-core shell model calculations of the few-nucleon observables. This facilitates sampling of the posterior probability distribution, allowing us to also determine the distributions of the parameters that quantify the truncation error. We find a ChiEFT expansion parameter of Q=0.33 ± 0.06 for these observables.

S. Wesolowski, I. Svensson, A. Ekström, C. Forssén, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips

Physical Review C 104, 064001 (2021)

Precision measurement of lightweight self-conjugate nucleus 80Zr

Protons and neutrons in the atomic nucleus move in shells analogous to the electronic shell structures of atoms. The nuclear shell structure varies as a result of changes in the nuclear mean field with the number of neutrons N and protons Z, and these variations can be probed by measuring the mass differences between nuclei. The N = Z = 40 self-conjugate nucleus 80Zr is of particular interest, as its proton and neutron shell structures are expected to be very similar, and its ground state is highly deformed. Here we provide evidence for the existence of a deformed double-shell closure in 80Zr through high-precision Penning trap mass measurements of 80–83Zr. Our mass values show that 80Zr is substantially lighter, and thus more strongly bound than predicted. This can be attributed to the deformed shell closure at N = Z = 40 and the large Wigner energy. A statistical Bayesian-model mixing analysis employing several global nuclear mass models demonstrates difficulties with reproducing the observed mass anomaly using current theory.

A. Hamaker, E. Leistenschneider, R. Jain, G. Bollen, S. A. Giuliani, K. Lund, W. Nazarewicz, L. Neufcourt, C. R. Nicoloff, D. Puentes, R. Ringle, C. S. Sumithrarachchi, and I. T. Yandow

Nature Physics (2021)

Toward emulating nuclear reactions using eigenvector continuation

We construct an efficient emulator for two-body scattering observables using the general (complex) Kohn variational principle and trial wave functions derived from eigenvector continuation. The emulator simultaneously evaluates an array of Kohn variational principles associated with different boundary conditions, which allows for the detection and removal of spurious singularities known as Kohn anomalies. When applied to the K-matrix only, our emulator resembles the one constructed by Furnstahl et al. (2020) although with reduced numerical noise. After a few applications to real potentials, we emulate differential cross sections for 40Ca(n,n) scattering based on a realistic optical potential and quantify the model uncertainties using Bayesian methods. These calculations serve as a proof of principle for future studies aimed at improving optical models.

C. Drischler, M. Quinonez, P.G. Giuliani, A.E. Lovell, and F.M. Nunes

Physics Letters B 823, 136777 (2021)

Does Bayesian Model Averaging improve polynomial extrapolations? Two toy problems as tests

We assess the accuracy of Bayesian polynomial extrapolations from small parameter values to large ones. We employ Bayesian Model Averaging (BMA) to combine results from polynomials of different order. Our study considers two “toy problems” where the underlying function used to generate the data sets is known. We use Bayesian parameter estimation to extract the polynomial coefficients and then combine the different polynomial degrees via BMA, weighting each according to its Bayesian evidence. We compare the predictive performance of this Bayesian Model Average with that of the individual polynomials.

M. A. Connell, I. Billig, and D. R. Phillips

J. Phys. G 48, 104001 (2021)
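The evidence-weighted combination of polynomial fits can be sketched in a few lines. This is not the paper's code: the analysis there uses full Bayesian evidences, whereas here the BIC serves only as a rough large-sample proxy for -2 log(evidence), and the data-generating function and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data from a known quadratic, observed only at small x.
f = lambda x: 1 + 0.5 * x - 0.25 * x ** 2
x = np.linspace(0.0, 1.0, 20)
y = f(x) + rng.normal(0.0, 0.05, x.size)

# Fit polynomials of several degrees; score each with the BIC.
models = {}
for deg in (1, 2, 3, 4):
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    n, k = x.size, deg + 1
    bic = n * np.log(np.mean(resid ** 2)) + k * np.log(n)
    models[deg] = (coef, bic)

# Convert BIC differences into normalized model weights.
bics = np.array([bic for _, bic in models.values()])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Model-averaged extrapolation to a point outside the data range.
x_star = 1.5
pred = sum(wi * np.polyval(coef, x_star)
           for wi, (coef, _) in zip(w, models.values()))
```

Comparing `pred` with the predictions of the individual polynomials, and with the true value f(1.5), is exactly the kind of test the paper performs on its two toy problems.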

Efficient emulators for scattering using eigenvector continuation

Eigenvector continuation (EC), which accurately and efficiently reproduces ground states for targeted sets of Hamiltonian parameters, is extended to scattering using the Kohn variational principle. Proofs-of-principle imply EC will be a valuable emulator for applying Bayesian inference to parameter estimation constrained by scattering observables.

R.J. Furnstahl, A.J. Garcia, P.J. Millican, and Xilin Zhang

Physics Letters B 809, 135719 (2020)
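The bound-state side of eigenvector continuation mentioned in the first sentence can be sketched compactly: exact ground states ("snapshots") are computed at a few training parameter values, and the Hamiltonian at a new parameter value is projected into their span, leaving a small generalized eigenvalue problem. The toy Hamiltonian below is invented for illustration and is not a nuclear interaction.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)

# Toy parametrized Hamiltonian H(c) = H0 + c * V on a 20-dimensional basis.
dim = 20
H0 = np.diag(np.arange(dim, dtype=float))
B = rng.normal(size=(dim, dim))
V = (B + B.T) / 2  # symmetric perturbation

def exact_gs(c):
    """Exact ground-state energy and vector of H(c)."""
    w, v = np.linalg.eigh(H0 + c * V)
    return w[0], v[:, 0]

# Snapshots: exact ground-state vectors at a few training parameters.
train_c = [0.0, 0.5, 1.0]
X = np.column_stack([exact_gs(c)[1] for c in train_c])

def ec_gs_energy(c):
    # Project H(c) onto the span of the snapshots and solve the resulting
    # small generalized eigenvalue problem h psi = E n psi.
    H = H0 + c * V
    h = X.T @ H @ X
    n = X.T @ X  # overlap (norm) matrix of the snapshots
    return eigh(h, n, eigvals_only=True)[0]
```

Because the emulator is variational, `ec_gs_energy(c)` is an upper bound on the exact ground-state energy, and with only three snapshots it typically tracks the exact result closely across the training interval; the scattering extension in the paper replaces this bound-state variational principle with the Kohn variational principle.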

Fast & accurate emulation of two-body scattering observables without wave functions

We combine Newton’s variational method with ideas from eigenvector continuation to construct a fast & accurate emulator for two-body scattering observables. The emulator will facilitate the application of rigorous statistical methods for interactions that depend smoothly on a set of free parameters. When used to emulate the neutron-proton cross section with a modern chiral interaction as a function of 26 free parameters, it reproduces the exact calculation with negligible error and provides an over 300x improvement in CPU time.

J.A. Melendez, C. Drischler, A.J. Garcia, R.J. Furnstahl, and Xilin Zhang

Physics Letters B 821, 136608 (2021)

Machine-Learning-Based Inversion of Nuclear Responses

A microscopic description of the interaction of atomic nuclei with external electroweak probes is required for elucidating aspects of short-range nuclear dynamics and for the correct interpretation of neutrino oscillation experiments. Nuclear quantum Monte Carlo methods infer the nuclear electroweak response functions from their Laplace transforms. Inverting the Laplace transform is a notoriously ill-posed problem, and Bayesian techniques, such as maximum entropy, are typically used to reconstruct the original response functions in the quasielastic region. In this work, we present a physics-informed artificial neural network architecture suitable for approximating the inverse of the Laplace transform. Utilizing simulated, albeit realistic, electromagnetic response functions, we show that this physics-informed artificial neural network outperforms maximum entropy in both the low-energy transfer and the quasielastic regions, thereby allowing for robust calculations of electron scattering and neutrino scattering on nuclei and inclusive muon capture rates.

K. Raghavan, P. Balaprakash, A. Lovato, N. Rocco, and S.M. Wild

Phys. Rev. C 103, 035502 (2021)

All BAND Publications

Get on the BAND Wagon: A Bayesian Framework for Quantifying Model Uncertainties in Nuclear Dynamics
D.R. Phillips, R.J. Furnstahl, U. Heinz, T. Maiti, W. Nazarewicz, F.M. Nunes, M. Plumlee, M.T. Pratola, S. Pratt, F.G. Viens, and S.M. Wild
J. Phys. G 48, 072001 (2021)

Bayesian mixture model approach to quantifying the empirical nuclear saturation point
C. Drischler, P.G. Giuliani, S. Bezoui, J. Piekarewicz, F. Viens
Physical Review C 110, 044320 (2024)

Assessing correlated truncation errors in modern nucleon-nucleon potentials
P. J. Millican, R. J. Furnstahl, J. A. Melendez, D. R. Phillips, and M. T. Pratola
Physical Review C 110, 044002 (2024)

Simulation experiment design for calibration via active learning
Özge Sürer
Journal of Quality Technology (2024)

Model orthogonalization and Bayesian forecast mixing via principal component analysis
P. Giuliani, K. Godbey, V. Kejzlar, W. Nazarewicz
Phys. Rev. Research 6, 033266 (2024)

Taweret: a Python package for Bayesian model mixing
K. Ingles, D. Liyanage, A. C. Semposki, J. C. Yannotty
J. Open Source Softw. 9(97), 6175 (2024)

ROSE: A reduced-order scattering emulator for optical models
D. Odell, P. Giuliani, K. Beyer, M. Catacora-Rios, M. Y.-H. Chan, E. Bonilla, R. J. Furnstahl, K. Godbey, and F. M. Nunes
Physical Review C 109, 044612 (2024)

Building trees for probabilistic prediction via scoring rules
Sara Shashaani, Özge Sürer, Matthew Plumlee, and Seth Guikema
Technometrics (2024)

Effective field theory for the bound states and scattering of a heavy charged particle and a neutral atom
Daniel Odell, Daniel R. Phillips, and Ubirajara van Kolck
Physical Review A 108, 062817 (2023)

Absolute cross section of the 12C(p,γ)13N reaction
K-U. Kettner, H. W. Becker, C. R. Brune, R. J. deBoer, J. Görres, D. Odell, D. Rogalla, and M. Wiescher
Physical Review C 108, 035805 (2023)

Model Mixing Using Bayesian Additive Regression Trees
J.C. Yannotty, T.J. Santner, R.J. Furnstahl, M.T. Pratola
Technometrics (2023)

Bayesian calibration of viscous anisotropic hydrodynamic simulations of heavy-ion collisions
D. Liyanage, Ö. Sürer, M. Plumlee, S.M. Wild, U. Heinz
Physical Review C 108, 054905 (2023)

Local Bayesian Dirichlet mixing of imperfect models
Vojta Kejzlar, Leo Neufcourt, and Witek Nazarewicz
Scientific Reports 13, 19600 (2023)

Sequential Bayesian experimental design for calibration of expensive simulation models
Özge Sürer, Matthew Plumlee, and Stefan M. Wild
Technometrics (2023)

Deconvoluting experimental decay energy spectra: The 26O case
Pierre Nzabahimana, Thomas Redpath, Thomas Baumann, Pawel Danielewicz, Pablo Giuliani, and Paul Guèye
Phys. Rev. C 107, 064315 (2023)

Constructing a simulation surrogate with partially observed output
Moses Y-H. Chan, Matthew Plumlee, and Stefan M. Wild
Technometrics (2023)

BUQEYE guide to projection-based emulators in nuclear physics
Christian Drischler, Jordan Melendez, Dick Furnstahl, Alberto Garcia, and Xilin Zhang
Front. Phys. 10, 1092931 (2023)

Bayes goes fast: Uncertainty quantification for a covariant energy density functional emulated by the reduced basis method
Pablo Giuliani, Kyle Godbey, Edgard Bonilla, Frederi Viens, and Jorge Piekarewicz
Front. Phys. 10, 1054524 (2023)

ParMOO: A Python library for parallel multiobjective simulation optimization
T.H. Chang and S.M. Wild
J. Open Source Softw. 8(82), 4468 (2023)

Variational inference with vine copulas: an efficient approach for Bayesian computer model calibration
V. Kejzlar and T. Maiti
Statistics and Computing 33, 18 (2022)

First near-threshold measurements of the 13C(α,n1)16O reaction for low-background-environment characterization
R. J. deBoer,…, D. Odell et al.
Phys. Rev. C 106, 055808 (2022)

Direct measurement of the astrophysical 19F(p,αγ)16O reaction in a deep-underground laboratory
L. Zhang,…, D. Odell et al.
Phys. Rev. C 106 (2022)

Measurement of 19F(p, γ)20Ne reaction suggests CNO breakout in first stars
L. Zhang,…, D. Odell et al.
Nature 610, 656–660 (2022)

Investigation of direct capture in the 23Na(p,γ)24Mg reaction
A. Boeltzig,…, D. Odell et al.
Phys. Rev. C 106, 045801 (2022)

Investigation of the 10B(p,α)7Be reaction from 0.8 to 2.0 MeV
B. Vande Kolk,…, D. Odell et al.
Phys. Rev. C 105, 055802 (2022)

Training and projecting: A reduced basis method emulator for many-body physics
Edgard Bonilla, Pablo Giuliani, Kyle Godbey, and Dean Lee
Phys. Rev. C 106, 054322 (2022)

Towards precise and accurate calculations of neutrinoless double-beta decay
V. Cirigliano, Z. Davoudi, J. Engel, R. J. Furnstahl, G. Hagen, U. Heinz, and 17 other authors, including W. Nazarewicz, D. R. Phillips, M. Plumlee, F. Viens, and S. M. Wild.
J. Phys. G 49, 120502 (2022)

Interpolating between small- and large-g expansions using Bayesian model mixing
A. C. Semposki, R. J. Furnstahl, D. R. Phillips
Phys. Rev. C 106, 044002 (2022)

Uncertainty quantification in breakup reactions
O. Surer, F. M. Nunes, M. Plumlee, S. M. Wild
Phys. Rev. C 106, 024607 (2022)

Model reduction methods for nuclear emulators
J. A. Melendez, C. Drischler, R. J. Furnstahl, A. J. Garcia, Xilin Zhang
J. Phys. G. 49, 102001 (2022)

Analyzing rotational bands in odd-mass nuclei using effective field theory and Bayesian methods
I.K. Alnamlah, E.A. Coello Pérez, D.R. Phillips
Front. Phys. 10, 901954 (2022)

Performing Bayesian Analyses With AZURE2 Using BRICK: An Application to the 7Be System
D. Odell, C. R. Brune, D. R. Phillips, R. J. deBoer, S. N. Paneru
Front. Phys. 10, 888746 (2022)

Colloquium: Machine Learning in Nuclear Physics
A. Boehnlein, M. Diefenthaler, C. Fanelli, M. Hjorth-Jensen, T. Horn, M. P. Kuchera, D. Lee, W. Nazarewicz, K. Orginos, P. Ostroumov, L.-G. Pang, A. Poon, N. Sato, M. Schram, A. Scheinker, M. S. Smith, X.-N. Wang, V. Ziegler
Rev. Mod. Phys. 94, 031003 (2022)

Fast emulation of quantum three-body scattering
Xilin Zhang and R. J. Furnstahl
Phys. Rev. C 105, 064004 (2022)

Statistical correlations of nuclear quadrupole deformations and charge radii
Paul-Gerhard Reinhard and Witold Nazarewicz
Phys. Rev. C 106, 014303 (2022)

Prehydrodynamic evolution and its impact on quark-gluon plasma signatures
D. Liyanage, D. Everett, C. Chattopadhyay, U. Heinz
Phys. Rev. C 105, 064908 (2022)

The Interplay of Femtoscopic and Charge-Balance Correlations
Scott Pratt and Karina Martirosova
Phys. Rev. C 105, 054906 (2022)

Nudged elastic band approach to nuclear fission pathways
Eric Flynn, Daniel Lay, Sylvester Agbemava, Pablo Giuliani, Kyle Godbey, Witold Nazarewicz, Jhilam Sadhukhan
Phys. Rev. C 105, 054302 (2022)

Effective field theory analysis of 3He-alpha scattering data
M. Poudel, D. R. Phillips
J. Phys. G 49, 045102 (2022)

Black Box Variational Bayesian Model Averaging
V. Kejzlar, S. Bhattacharya, M. Son, T. Maiti
The American Statistician (2022)

Efficient emulation of relativistic heavy ion collisions with transfer learning
D. Liyanage, Y. Ji, D. Everett, M. Heffernan, U. Heinz, S. Mak, J-F. Paquet
Physical Review C 105, 034910 (2022)

Statistical tools for a better optical model
M. Catacora-Rios, G. B. King, A. E. Lovell, and F. M. Nunes
Physical Review C 104, 064611 (2021)

Rigorous constraints on three-nucleon forces in chiral effective field theory from fast and accurate calculations of few-body observables
S. Wesolowski, I. Svensson, A. Ekström, C. Forssén, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips
Physical Review C 104, 064001 (2021)

Precision measurement of lightweight self-conjugate nucleus 80Zr
A. Hamaker, E. Leistenschneider, R. Jain, G. Bollen, S. A. Giuliani, K. Lund, W. Nazarewicz, L. Neufcourt, C. R. Nicoloff, D. Puentes, R. Ringle, C. S. Sumithrarachchi, and I. T. Yandow
Nature Physics (2021)

Toward emulating nuclear reactions using eigenvector continuation
C. Drischler, M. Quinonez, P.G. Giuliani, A.E. Lovell, and F.M. Nunes
Physics Letters B 823, 136777 (2021)

Does Bayesian Model Averaging improve polynomial extrapolations? Two toy problems as tests
M. A. Connell, I. Billig, and D. R. Phillips
J. Phys. G 48, 104001 (2021)

Efficient emulators for scattering using eigenvector continuation
R.J. Furnstahl, A.J. Garcia, P.J. Millican, and Xilin Zhang
Physics Letters B 809, 135719 (2020)

Fast & accurate emulation of two-body scattering observables without wave functions
J.A. Melendez, C. Drischler, A.J. Garcia, R.J. Furnstahl, and Xilin Zhang
Physics Letters B 821, 136608 (2021)

Machine-Learning-Based Inversion of Nuclear Responses
K. Raghavan, P. Balaprakash, A. Lovato, N. Rocco, and S.M. Wild
Phys. Rev. C 103, 035502 (2021)