Modelling S-shaped curves in Python

Apologies beforehand, since I'm relatively new to Python.
I have the following data for a reaction describing the growth of a compound:
[image: reaction data]
The first derivative of this S-shaped curve describing the reaction seems to resemble an F-distribution curve. Since a cumulative distribution function (CDF) is the integral of a density curve, I was hoping to fit a CDF such as that of an F(10, 10) distribution to model my reaction 1 (resembling the bottom-right function in the image attached below).
[image: F-distribution CDF curves]
The formula to describe such curve shape is written as follows:
[image: formula for the CDF of the F-distribution]
Thus my question is: how can I write this formula in a pythonic way? NOTE: I've tried fitting different types of logistic functions, but none fit correctly. The F-like CDF, however, seems to describe reaction 1 properly.
Thanks a lot for the help!!
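A minimal sketch of one way to do this with SciPy: `scipy.stats.f.cdf` already implements the F-distribution CDF, so you can wrap it in a model function and fit it with `scipy.optimize.curve_fit`. The data arrays, starting values, and bounds below are placeholders; replace them with your measured reaction data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def model(t, dfn, dfd, loc, scale, amplitude):
    # F-distribution CDF with shift, scale and amplitude as free parameters
    return amplitude * f_dist.cdf(t, dfn, dfd, loc=loc, scale=scale)

# stand-in for the measured reaction data -- replace with your own arrays
t = np.linspace(0, 10, 50)
y = model(t, 10, 10, 0, 1, 1) + np.random.default_rng(1).normal(0, 0.01, 50)

# p0 roughly F(10, 10); bounds keep the degrees of freedom positive
popt, _ = curve_fit(model, t, y, p0=[10, 10, 0, 1, 1],
                    bounds=([0.1, 0.1, -10, 0.01, 0.01],
                            [100, 100, 10, 100, 10]))
```

`popt` then holds the fitted degrees of freedom, shift, scale, and amplitude, and `model(t, *popt)` gives the fitted curve to plot against the data.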

Related

Python & Scipy: How to estimate a von mises scale factor with binned angle data?

I have an array of binned angle data, and another array of weights for each bin. I am using the vmpar() function found here in order to estimate the loc and kappa parameters. I then use the vmpdf() function, found in the same script, to create a von mises probability density function (pdf).
However, the vmpar function does not give me a scale parameter like the scipy vonmises.fit() function does. But I don't know how to use the vonmises.fit() function with binned data, since this function does not seem to accept weights as input.
My question is therefore: how do I estimate the scale from my binned angle data? The reason I want to adjust the scale is so that I can plot my original data and the pdf on the same graph. For example, now the pdf is not scaled to my original data, as seen in the image below (blue=original data, red line = pdf).
I am quite new to circular statistics, so perhaps there is a very easy way to implement this that I am overlooking. I need to figure this out asap, so I appreciate any help!
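Not a full answer to estimating kappa from weighted bins, but for the overlay problem specifically, a common trick is to multiply the pdf by (total weight × bin width), so that its area matches the histogram's. A minimal sketch, where the bin edges, weights, and kappa value are all stand-ins for your data:

```python
import numpy as np
from scipy.stats import vonmises

# stand-in binned angle data: bin centres (radians) and per-bin weights
edges = np.linspace(-np.pi, np.pi, 37)
centres = 0.5 * (edges[:-1] + edges[1:])
weights = 100 * vonmises.pdf(centres, kappa=2.0)   # placeholder counts

bin_width = edges[1] - edges[0]
# a pdf integrates to 1, so to overlay it on a histogram of counts,
# rescale it by (total count) * (bin width)
scale = weights.sum() * bin_width
pdf_scaled = vonmises.pdf(centres, kappa=2.0, loc=0.0) * scale
```

Plotting `weights` as bars and `pdf_scaled` as a line over `centres` should then put both on the same vertical scale.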

explanation of sklearn optics plot

I am currently learning how to use OPTICS in sklearn. I am inputting a numpy array of (205,22). I am able to get plots out of it, but I do not understand how I am getting a 2d plot out of multiple dimensions and how I am supposed to read it. I more or less understand the reachability plot, but the rest of it makes no sense to me. Can someone please explain what is happening. Is the function just simplifying the data to two dimensions somehow? Thank you
From the sklearn user guide:
The reachability distances generated by OPTICS allow for variable density extraction of clusters within a single data set. As shown in the above plot, combining reachability distances and data set ordering_ produces a reachability plot, where point density is represented on the Y-axis, and points are ordered such that nearby points are adjacent. ‘Cutting’ the reachability plot at a single value produces DBSCAN like results; all points above the ‘cut’ are classified as noise, and each time that there is a break when reading from left to right signifies a new cluster.
The other three plots are visual representations of the actual clusters found by three different algorithms.
As you can see in the OPTICS clustering plot, there are two high-density clusters (blue and cyan); according to the reachability plot, the gray crosses are classified as noise because of the low xi value.
In the DBSCAN clustering with eps = 0.5, everything is considered noise, since the epsilon value is too low and the algorithm cannot find any dense regions.
In the third plot it is obvious that the algorithm found just a single cluster because of the adjusted epsilon value, and everything above the 2.0 line is considered noise.
Please refer to the user guide.
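As a concrete sketch of what is behind the reachability plot, here is how you would compute it for an array shaped like the one in the question; the blob positions, `min_samples`, and `xi` values are arbitrary placeholders:

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
# stand-in for the (205, 22) input: two dense blobs plus sparse noise
X = np.vstack([
    rng.normal(0.0, 0.3, (80, 22)),
    rng.normal(5.0, 0.3, (80, 22)),
    rng.uniform(-2.0, 7.0, (45, 22)),
])

opt = OPTICS(min_samples=10, xi=0.05).fit(X)
# y-axis of the reachability plot: reachability in processing order
reach = opt.reachability_[opt.ordering_]
labels = opt.labels_[opt.ordering_]   # -1 marks noise
```

Note that the 2-D cluster scatter plots in the user-guide example are not a dimensionality reduction: the example dataset there is already two-dimensional, and the plots simply show its two features. For 22-dimensional input, only the reachability plot has a direct interpretation; visualizing the clusters themselves would require a separate projection of your choosing.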

I’m trying to programme a plot of the 2D Schrödinger equation in python using the finite differences method

I have begun to write a program that calculates solutions to the Schrödinger equation in 2D using the finite differences method. I would like to display the solutions graphically using a contour plot or some other graphical display, taking user input for the dimensions and number of grid points.
I have simplified the Schrödinger equation by setting hbar^2/2m to 1 and setting the potential (V) equal to 0, which gives:
-(d²ψ/dx² + d²ψ/dy²) = E·ψ
Using the finite differences method the left hand side of the equation becomes the matrix of the form:
[image: block-tridiagonal finite-difference matrix]
So this now becomes an eigenvalue problem which is the part I’m having trouble implementing.
After using the command np.linalg.eig to get the eigenvalues and eigenvectors I’m unsure of how to code a graphical interpretation of these solutions in 2D. Any help will be much appreciated.
Basically I want to use the eigenvalues and eigenvector to graphically display the solutions I just don’t know which to use and how to code it.
Cheers
You need to be more precise in your question. You say you don't know which to use, eigenvalues or eigenvectors, but that depends on what you want to plot.
Do you want to plot the energies of the quantum mechanical system? Those are represented by the eigenvalues of the Hamiltonian operator.
Do you want to plot the states that you observe your system to be in when you measure the energy? Those are given by the eigenvectors.
After finding the relevant quantity, what is it you want to plot? If it's just the energies, then you can use matplotlib's heatmap tools to show the energy as a function of x and y. If it's the states of definite energy, then you could use some of the vector field tools that matplotlib provides.
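As a concrete sketch of the eigenvector route (the grid size and box length below are arbitrary placeholders): build the 2-D finite-difference Laplacian as a Kronecker sum, diagonalize it with `np.linalg.eigh` (preferable to `np.linalg.eig` here, since the matrix is symmetric), and reshape each eigenvector back onto the grid for contour plotting.

```python
import numpy as np

def laplacian_1d(n, h):
    # 1D second-derivative matrix with Dirichlet (hard-wall) boundaries
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

n, L = 30, 1.0                   # grid points per axis, box length (placeholders)
h = L / (n + 1)
D = laplacian_1d(n, h)
I = np.eye(n)
# 2D Laplacian as a Kronecker sum; H = -(d^2/dx^2 + d^2/dy^2) with V = 0
H = -(np.kron(D, I) + np.kron(I, D))
E, psi = np.linalg.eigh(H)       # eigenvalues ascending, eigenvectors in columns

# each eigenvector is a flattened n x n grid; reshape for a contour plot
psi0 = psi[:, 0].reshape(n, n)   # ground state; plot with plt.contourf(psi0)
```

For V = 0 on the unit box this is the particle-in-a-box problem, so `E[0]` should come out close to the analytic ground-state energy 2π² ≈ 19.74, which is a useful sanity check.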

Deconvoluting a Voigt fitting in Python to extract Lorentzian FWHM?

I've found loads of examples of how to fit a Voigt profile to spectral data but I'm wondering if there's a way to deconvolute it and specifically determine the FWHM of the Lorentzian component.
I know if you do Voigt fitting in Origin, it returns a load of data including the Gaussian and Lorentzian FWHM but I'm trying to figure out how to do this in Python.
Any help is very appreciated.
You should look at the Vaczi 2014 article "A New, Simple Approximation for the Deconvolution of Instrumental Broadening in Spectroscopic Band Profiles".
It discusses recovering the Lorentzian width by removing/deconvolving the instrument-based Gaussian broadening from the measured Voigt width. The best formula is Eq. (10), based on Kielkopf/Olivero, which has a maximum relative error of 3.8e-3. The suggested Eq. (5b) has a maximum relative error of 1.2e-2 (1.2%) and should be avoided.

Linear Regression from a .csv file in matplotlib

Can someone explain how to make a scatter plot and linear regression from an excel file?
I know how to import the file with pandas, and I know how to do a scatter plot by plugging my own data into matplotlib, but I don't know how to make Python do all three from the file.
Ideally it would also give r value, p value, std error, slope and intercept.
I'm very new to all of this and any help would be great.
I've searched around stack overflow, reddit, and else where, but I haven't found anything recent.
SciPy has a basic linear regression function that fits your criteria: scipy.stats.linregress. Just use the appropriate columns from your DataFrame as x and y.
Pyplot's basic plt.plot(x, y) function will give you a line: matplotlib.pyplot.plot. You can compute a set of y values using the slope and intercept.
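Putting the three steps together, a minimal sketch; the inline DataFrame below is a stand-in for `pd.read_csv("your_file.csv")`, and the column names `"x"` and `"y"` are assumptions about your file:

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend; omit to show a window
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import linregress

# stand-in for: df = pd.read_csv("your_file.csv")
df = pd.DataFrame({"x": [1, 2, 3, 4, 5],
                   "y": [2.1, 3.9, 6.2, 7.8, 10.1]})

res = linregress(df["x"], df["y"])
print(res.slope, res.intercept, res.rvalue, res.pvalue, res.stderr)

plt.scatter(df["x"], df["y"])                            # the raw points
plt.plot(df["x"], res.intercept + res.slope * df["x"],   # the fitted line
         color="red")
plt.savefig("regression.png")
```

`linregress` returns the slope, intercept, r value, p value, and standard error in one call, which covers everything asked for.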
