
Uncertainties calculations and reporting

Posted: Thu Apr 06, 2023 2:37 am
by thanasis

One of the features I was most happy to see in the new esfit version (and one often requested by reviewers) is the uncertainty calculation of fitted parameters.

Of course, it is also the case that most ES users (or users of other software, for that matter), including myself, are not fully trained in the details of what the statistical treatment actually does and means. Statisticians and metrologists often point out serious misconceptions that casually appear in research articles, as we tend to apply such tools more or less blindly.

In trying to assess how to use the results from the new esfit, I have been trying to better understand the key concepts. One key point that had escaped me all these years was that there is a trend to abandon the "error" approach (along with confidence intervals and significant digits, if you believe it!) and adopt the "standard uncertainty" approach. Actually, there is a series of ISO guides to that effect, the most relevant of which is part 3, which includes reporting examples and practical guidelines.

The basic literature can be somewhat impenetrable, but from what I could decipher, EasySpin's error calculation using the covariance matrix is relevant to this new approach. The question is: do the "standard deviations" in fit1.pstd correspond to the "standard uncertainties" that metrologists now advise us to use?


Re: Uncertainties calculations and reporting

Posted: Wed May 10, 2023 11:25 am
by Stefan Stoll

Thanks for posting these very useful links!

Indeed, EasySpin's esfit uses the covariance matrix approach. Basically, it determines the parameter values that maximize the likelihood function, assuming zero-mean Gaussian noise. Then, it determines the curvature around this maximum; inverting the curvature matrix yields the covariance matrix. The reported "standard deviation" of a parameter is the square root of the corresponding diagonal element of the covariance matrix.
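
To make this concrete, here is a minimal sketch of the covariance-matrix approach for a generic least-squares fit. This is not esfit code; the exponential-decay model, the use of fminsearch, and the numerical Jacobian are all just assumptions made for the illustration.

    % Synthetic data: exponential decay with zero-mean Gaussian noise
    x = linspace(0,10,200).';
    ptrue = [1; 0.4];
    model = @(p,x) p(1)*exp(-p(2)*x);
    y = model(ptrue,x) + 0.02*randn(size(x));

    % Least-squares fit (equivalent to maximum likelihood for Gaussian noise)
    chi2 = @(p) sum((y - model(p,x)).^2);
    pfit = fminsearch(chi2,[0.8; 0.3]);

    % Numerical Jacobian of the model at the best-fit parameters
    J = zeros(numel(x),numel(pfit));
    h = 1e-6;
    for k = 1:numel(pfit)
      dp = zeros(size(pfit)); dp(k) = h;
      J(:,k) = (model(pfit+dp,x) - model(pfit-dp,x))/(2*h);
    end

    % Covariance matrix from the curvature (Gauss-Newton approximation J'*J)
    resid = y - model(pfit,x);
    sigma2 = sum(resid.^2)/(numel(x) - numel(pfit));  % estimated noise variance
    C = sigma2*inv(J.'*J);                            % parameter covariance matrix
    pstd = sqrt(diag(C));                             % parameter standard deviations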

IMO, it is more useful to report the 95% confidence interval rather than the standard deviation, since the former gives a more complete assessment of the fitting uncertainty.
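
Continuing the sketch above, and assuming the parameter estimates are approximately Gaussian-distributed, an approximate two-sided 95% confidence interval follows directly from the standard deviations:

    % 1.96 is the two-sided 95% quantile of the standard normal distribution
    ci95 = [pfit - 1.96*pstd, pfit + 1.96*pstd];   % one row per parameter: [lower upper]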

One should also keep in mind that these are just statistical uncertainties from the fitting, and any systematic errors between the spin Hamiltonian model and the experimental data are not included.