Deconvoluting a Voigt fitting in Python to extract Lorentzian FWHM?

I've found loads of examples of how to fit a Voigt profile to spectral data, but I'm wondering whether there's a way to deconvolute it and specifically determine the FWHM of the Lorentzian component.
I know that if you do Voigt fitting in Origin, it returns a load of data, including the Gaussian and Lorentzian FWHM, but I'm trying to figure out how to do this in Python.
Any help is much appreciated.

You should look at Váczi's 2014 article "A New, Simple Approximation for the Deconvolution of Instrumental Broadening in Spectroscopic Band Profiles".
He discusses recovering the Lorentzian width by removing (deconvolving) the instrument-based Gaussian broadening from the measured Voigt width. The best formula is his Eq. (10), based on Kielkopf/Olivero, which has a maximum relative error of 3.8E-3 (0.38%). His Eq. (5b) has a maximum relative error of 1.2E-2 (1.2%) and should be avoided.
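If you already have the fitted Voigt FWHM and the (instrumental) Gaussian FWHM, the extraction can be done in closed form. Below is a minimal sketch that inverts the widely used Kielkopf/Olivero single-formula approximation f_V ≈ 0.5346*f_L + sqrt(0.2166*f_L**2 + f_G**2); Váczi's Eq. (10) refines the coefficients, so take his exact values from the paper if you need the quoted 0.38% accuracy.

# Sketch: extract the Lorentzian FWHM from the Voigt and Gaussian FWHMs by
# inverting f_V ~= 0.5346*f_L + sqrt(0.2166*f_L**2 + f_G**2)
# (Kielkopf/Olivero approximation; Vaczi's Eq. (10) uses refined coefficients).
import numpy as np

def lorentzian_fwhm(f_v, f_g):
    # rearranged into a quadratic in f_L:
    # 0.0692*f_L**2 - 1.0692*f_v*f_L + (f_v**2 - f_g**2) = 0
    disc = np.sqrt((1.0692 * f_v) ** 2 - 4 * 0.0692 * (f_v ** 2 - f_g ** 2))
    return (1.0692 * f_v - disc) / (2 * 0.0692)

print(lorentzian_fwhm(1.0, 0.0))  # no Gaussian broadening -> returns 1.0

Alternatively, if you fit the profile with scipy.special.voigt_profile (which takes the Gaussian sigma and the Lorentzian gamma as separate parameters), no deconvolution is needed at all: the component FWHMs are 2*sigma*sqrt(2*ln 2) and 2*gamma directly, which is essentially what Origin reports.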

Related

Python & Scipy: How to estimate a von mises scale factor with binned angle data?

I have an array of binned angle data and another array of weights for each bin. I am using the vmpar() function found here to estimate the loc and kappa parameters, and then the vmpdf() function from the same script to create a von Mises probability density function (pdf).
However, vmpar does not give me a scale parameter the way scipy's vonmises.fit() does, and I don't know how to use vonmises.fit() with binned data, since that function does not seem to accept weights as input.
My question is therefore: how do I estimate the scale from my binned angle data? The reason I want to adjust the scale is so that I can plot my original data and the pdf on the same graph. As it stands, the pdf is not scaled to my original data, as seen in the image below (blue = original data, red line = pdf).
I am quite new to circular statistics, so perhaps there is a very easy way to implement this that I am overlooking. I need to figure this out asap, so I appreciate any help!
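Not an answer to vmpar() specifically, but a common way to get the overlay right: scipy's scale parameter is not an amplitude, so instead multiply the unit-area pdf by the area of the weighted histogram. A minimal sketch, with made-up bins, weights, and loc/kappa standing in for your data and the vmpar() output:

# Sketch: scale a unit-area von Mises pdf up to a weighted histogram.
# bin_centers/weights are made-up stand-ins; loc/kappa would come from vmpar().
import numpy as np
from scipy.stats import vonmises

bin_centers = np.linspace(-np.pi, np.pi, 36, endpoint=False) + np.pi / 36
weights = np.abs(np.cos(bin_centers))       # fake weighted counts per bin
loc, kappa = 0.0, 2.0                       # stand-ins for vmpar() output

bin_width = bin_centers[1] - bin_centers[0]
area = weights.sum() * bin_width            # area under the histogram
theta = np.linspace(-np.pi, np.pi, 500)
pdf_scaled = vonmises.pdf(theta, kappa, loc=loc) * area

With that factor applied, the pdf curve and the histogram live on the same vertical scale.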

Modelling S-shaped curves in Python

Apologies beforehand, since I'm relatively new to Python.
I have the following data for a reaction describing the growth of a compound:
[image: measured growth data]
The first derivative of this S-shaped curve seems to resemble an F-distribution curve. As I understand it, a cumulative distribution function (CDF) is the integral of a distribution curve, so I was hoping to fit an F-distribution CDF, e.g. F(10,10), to model my reaction 1 (resembling the bottom-right function in the image attached below).
[image: example CDF shapes of the F-distribution]
The formula describing such a curve shape is the F-distribution CDF,
F(x; d1, d2) = I_z(d1/2, d2/2) with z = d1*x / (d1*x + d2),
where I_z is the regularized incomplete beta function.
Thus my question is: how can I write this formula in a pythonic way? NOTE: I've tried fitting different types of logistic functions, but none of them fit correctly. The F-like CDF, however, seems to describe reaction 1 properly.
Thanks a lot for the help!!
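For reference, SciPy exposes the regularized incomplete beta function directly, so the formula above translates almost verbatim. A minimal sketch, where the amplitude A, offset x0, and rate k are hypothetical extra parameters added so the curve can be stretched onto reaction data:

# Sketch: F-distribution CDF via the regularized incomplete beta function,
# wrapped with hypothetical scaling parameters (A, x0, k) for curve fitting.
import numpy as np
from scipy.special import betainc
from scipy.optimize import curve_fit

def f_cdf(x, d1, d2):
    x = np.clip(x, 0.0, None)               # the CDF is 0 for x <= 0
    return betainc(d1 / 2.0, d2 / 2.0, d1 * x / (d1 * x + d2))

def model(t, A, x0, k, d1, d2):
    return A * f_cdf(k * (t - x0), d1, d2)

# with your measured arrays t, y:
# popt, _ = curve_fit(model, t, y, p0=[y.max(), t.min(), 1.0, 10.0, 10.0])

scipy.stats.f.cdf(x, d1, d2) computes the same function if you prefer not to write it out.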

Find relative scale in monocular Visual Odometry without PnP

I am implementing a standard VO algorithm with a few changes, i.e. extract features, match the feature points, find the essential matrix and decompose it to get the pose. After initialization, however, instead of using 3D-2D motion estimation (PnP) for subsequent frames, I'm using the same 2D-2D motion estimation (via the essential matrix). I find that 2D-2D estimation seems a lot more accurate than 3D-2D.
To find the relative scale of the second pose with respect to the first, I can find the common points (those triangulated for both frame pairs). According to the Visual Odometry Tutorial by Scaramuzza, one can find the relative scale from the ratio of relative distances between common point pairs.
If f13D and f23D are the triangulated 3D points from subsequent frame pairs, I choose point pairs at random and compute the distances; here is a rough code snippet:
# sample random point-pair indices (many more pairs than points, with reuse)
indices = np.random.choice(np.arange(0, len(f23D)), size=(5 * len(f23D), 2), replace=True)
indices = indices[indices[..., 0] != indices[..., 1]]  # drop self-pairs
# ratio of pairwise distances between the two triangulations
num = np.linalg.norm(f13D[indices[..., 0]] - f13D[indices[..., 1]], axis=1)
den = np.linalg.norm(f23D[indices[..., 0]] - f23D[indices[..., 1]], axis=1)
return np.median(num / den)
I have also tried replacing the last line with a linear RANSAC estimator. However, since the triangulation is not perfect, the ratios are extremely noisy, and the scale estimate therefore varies significantly across different NumPy seeds.
Is this the right way to implement relative scale in monocular VO as described in the tutorial? If not, what is the best way to do it? (I do not wish to use PnP, since its rotation estimate seems less accurate.)
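One way to remove the seed dependence is to drop the random sampling and take the median ratio over every point pair exactly once; a minimal sketch, assuming f13D and f23D are (N, 3) arrays of the common triangulated points:

# Deterministic variant: every unordered point pair exactly once, so the
# estimate no longer depends on a NumPy seed. O(N^2) pairs.
import numpy as np

def relative_scale_all_pairs(f13D, f23D, eps=1e-9):
    i, j = np.triu_indices(len(f13D), k=1)   # all unordered index pairs
    num = np.linalg.norm(f13D[i] - f13D[j], axis=1)
    den = np.linalg.norm(f23D[i] - f23D[j], axis=1)
    valid = den > eps                        # guard near-degenerate pairs
    return np.median(num[valid] / den[valid])

The median is already a robust estimator, so with all pairs included the run-to-run variance should collapse; any remaining spread reflects the triangulation noise itself.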

How to find the critical points, Jacobian matrix and eigenvalues in Python for a set of autonomous equations? How to get their phase space plot?

I'm new to Python. I have a set of autonomous equations and am trying to analyse their asymptotic behaviour using phase space analysis:
f(x, y) = a*x*y*((y**2) + a + c)
g(x, y) = a + (y**3) + ((y**3) + (y**2)*(x + a))
where x and y are the variables. I would appreciate help finding the critical points, Jacobian and eigenvalues, and producing the phase space plot.
Use the SymPy library.
Its Matrix class has a built-in jacobian method.
You can use sympy.solve with Eq objects to find your critical points.
Use the eigenvals method to find the eigenvalues of your Jacobian.
Lastly, the quiver function comes from Matplotlib's pyplot (not SciPy); use it to draw the phase space plot, as sketched below. Good luck.
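A minimal sketch putting those pieces together, with made-up parameter values a = c = 1 (the question leaves them unspecified) and the missing multiplication signs filled in:

# Sketch: critical points, Jacobian eigenvalues and a quiver phase portrait
# for the system above, with assumed parameter values a = c = 1.
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt

x, y = sp.symbols('x y')
a, c = 1, 1                                   # made-up values for the parameters

f = a * x * y * (y**2 + a + c)
g = a + y**3 + (y**3 + y**2 * (x + a))

# critical points: solve f = 0, g = 0 simultaneously
crit = sp.solve([sp.Eq(f, 0), sp.Eq(g, 0)], [x, y], dict=True)

# Jacobian and its eigenvalues at each critical point (evaluated numerically)
J = sp.Matrix([f, g]).jacobian([x, y])
for pt in crit:
    J_num = np.array(J.subs(pt).evalf(), dtype=complex)
    print(pt, np.linalg.eigvals(J_num))

# phase portrait: quiver comes from Matplotlib, not SciPy
fn = sp.lambdify((x, y), f, 'numpy')
gn = sp.lambdify((x, y), g, 'numpy')
X, Y = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
plt.quiver(X, Y, fn(X, Y), gn(X, Y))
plt.xlabel('x')
plt.ylabel('y')
plt.show()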

Skewed gaussian distribution within an ellipse with python

Okay, so I've been pulling my hair out over this for the last couple of days and haven't made much progress.
I want to generate a 2-D array (grid) with a Gaussian-like distribution on an elliptical domain. Why do I say Gaussian-like? Well, I want an asymmetric, i.e. skewed, Gaussian, where the peak of the surface is at some point (x0, y0) inside the ellipse and the values on the perimeter of the ellipse are zero (or approaching zero...).
The attached picture might describe what I mean a little better.
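One simple construction (a sketch, not the only way): use different Gaussian widths on each side of the peak to get the skew, then multiply by a window that is exactly zero on and outside the ellipse. All parameter values below are made up.

# Sketch: asymmetric ("skewed") Gaussian peaked at (x0, y0), tapered to zero
# on the ellipse boundary. All parameter values are made up.
import numpy as np

a_semi, b_semi = 3.0, 2.0                     # ellipse semi-axes
x0, y0 = 0.8, -0.3                            # peak location inside the ellipse
sx_pos, sx_neg = 1.2, 0.5                     # x-widths right/left of the peak
sy_pos, sy_neg = 0.8, 0.4                     # y-widths above/below the peak

xg = np.linspace(-a_semi, a_semi, 200)
yg = np.linspace(-b_semi, b_semi, 200)
X, Y = np.meshgrid(xg, yg)

# piecewise sigma on each side of the peak produces the asymmetry
sx = np.where(X >= x0, sx_pos, sx_neg)
sy = np.where(Y >= y0, sy_pos, sy_neg)
Z = np.exp(-0.5 * (((X - x0) / sx) ** 2 + ((Y - y0) / sy) ** 2))

# elliptical radius squared: 1 on the boundary, > 1 outside; the clipped
# window kills everything at and beyond the perimeter
r2 = (X / a_semi) ** 2 + (Y / b_semi) ** 2
Z *= np.clip(1.0 - r2, 0.0, None)

Z then peaks near (x0, y0) and is exactly zero on and outside the perimeter; the piecewise-sigma trick is the 2-D analogue of a split (bifurcated) Gaussian.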
