Orientation/field-dependent line widths
I am trying to simulate spectra such as those reported by Eaton & Eaton here.
The main idea is that the phase-memory times Tm are orientation-dependent, and hence effectively field-dependent in frozen-solution spectra. To the extent that the line widths are Tm-determined, the net effect is that they differ across the field positions of the spectrum.
In the case of V(IV) (S = 1/2, I = 7/2), the g-anisotropy is small and the spectrum is distributed rather symmetrically around a central resonance. It is then not so much the tensor orientation as the distance from the centre of the spectrum that determines the line widths: resonances along the same orientation are narrower or broader depending on where they lie in the spectrum. Consequently, modelling this with AStrain/gStrain/gAStrainCorr does not work.
I was wondering whether there might be a way to deal with this by imposing an empirical linewidth = f(H) dependence.
One thought is to define a vector of the same length as the spectrum, with values between 0 and 1 at the appropriate positions, and use it to scale the spectral amplitudes, along the lines of the sketch below.
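Something like this, in EasySpin terms (a minimal sketch; the spin-system values, field range, and mask function are placeholders for a generic vanadyl-type centre, not my actual system):

```matlab
% Minimal sketch of the amplitude-mask idea; all values are placeholders
Sys.S = 1/2;
Sys.Nucs = '51V';                  % I = 7/2
Sys.g = [1.98 1.98 1.96];          % placeholder g values
Sys.A = [180 180 480];             % placeholder hyperfine couplings, MHz
Sys.lwpp = 0.5;                    % single field-independent linewidth, mT

Exp.mwFreq = 9.5;                  % GHz
Exp.Range = [250 450];             % mT
Exp.nPoints = 4096;

[B,spc] = pepper(Sys,Exp);

% Empirical 0..1 mask over the field axis (placeholder functional form)
mask = exp(-((B-mean(B))/60).^2);

% Rescale the spectral amplitudes point by point
spcMasked = spc.*mask;

plot(B,spc,B,spcMasked);
```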
This is quite brute-force and rather bulky (and it only rescales amplitudes, so it doesn't tackle the underlying line widths at all). Would there be a smarter way to achieve such a simulation?
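For concreteness, the behaviour I am after would look something like the post-processing below: simulate with a minimal intrinsic linewidth, then broaden each field point with a Gaussian whose width grows with the distance from the spectrum centre (again only a sketch; the width function w is a placeholder for an empirical Tm-derived dependence):

```matlab
% Sketch of field-dependent broadening as a post-processing step;
% Sys/Exp values and the width function w(B) are placeholders
Sys.S = 1/2;
Sys.Nucs = '51V';
Sys.g = [1.98 1.98 1.96];
Sys.A = [180 180 480];             % MHz
Sys.lwpp = 0.1;                    % keep the intrinsic broadening minimal, mT

Exp.mwFreq = 9.5;                  % GHz
Exp.Range = [250 450];             % mT
Exp.nPoints = 4096;
Exp.Harmonic = 0;                  % absorption spectrum; differentiate at the end

[B,spc] = pepper(Sys,Exp);

% Empirical FWHM as a function of distance from the spectrum centre, mT
Bc = mean(B);
w = 0.3 + 1.5*abs(B-Bc)/max(abs(B-Bc));   % placeholder f(B)

% Replace each point by a Gaussian of position-dependent width
spcB = zeros(size(spc));
for i = 1:numel(B)
  spcB = spcB + spc(i)*gaussian(B,B(i),w(i));
end
spcB = gradient(spcB,B);           % back to the first-harmonic presentation

plot(B,spcB);
```

This is O(N²) and purely phenomenological, which is partly why I suspect there is a better way.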
Thanks!