Histogram equalization of 1D values and the reverse transform in Python

I have a set of 1D values. When I plot a histogram of the values, I notice that they are not uniformly distributed. Can I find a non-linear mapping such that the transformed scores are uniformly distributed? I also need the reverse transform function.
One way I know of is to do histogram equalization, as with images. Is there any built-in function in Python to achieve this?
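There doesn't seem to be a single dedicated built-in, but the forward transform you describe is just the empirical CDF, and its inverse is the empirical quantile function (scikit-learn's QuantileTransformer with output_distribution='uniform' also implements both directions). A minimal NumPy sketch, where make_equalizer is only an illustrative helper name:

    import numpy as np

    def make_equalizer(values):
        """Return (forward, inverse) maps based on the empirical CDF."""
        sorted_vals = np.sort(np.asarray(values))
        # Plotting positions in (0, 1); any standard convention works here.
        quantiles = (np.arange(sorted_vals.size) + 0.5) / sorted_vals.size

        def forward(x):
            # Original values -> approximately uniform scores in (0, 1).
            return np.interp(x, sorted_vals, quantiles)

        def inverse(u):
            # Uniform scores -> values on the original scale.
            return np.interp(u, quantiles, sorted_vals)

        return forward, inverse

    # Example with skewed data:
    rng = np.random.default_rng(0)
    data = rng.exponential(size=10_000)
    forward, inverse = make_equalizer(data)
    uniform_scores = forward(data)        # histogram of this is roughly flat
    recovered = inverse(uniform_scores)   # close to the original data

For new, unseen values the forward map simply interpolates between the stored quantiles, which is usually the behaviour you want.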

Related

Python & SciPy: How to estimate a von Mises scale factor with binned angle data?

I have an array of binned angle data, and another array of weights for each bin. I am using the vmpar() function found here to estimate the loc and kappa parameters. I then use the vmpdf() function, found in the same script, to create a von Mises probability density function (pdf).
However, the vmpar() function does not give me a scale parameter the way scipy's vonmises.fit() function does, and I don't know how to use vonmises.fit() with binned data, since that function does not seem to accept weights as input.
My question is therefore: how do I estimate the scale from my binned angle data? The reason I want to adjust the scale is so that I can plot my original data and the pdf on the same graph. Right now the pdf is not scaled to my original data, as seen in the image below (blue = original data, red line = pdf).
I am quite new to circular statistics, so perhaps there is a very easy way to implement this that I am overlooking. I need to figure this out ASAP, so I'd appreciate any help!
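One possible route (a sketch, not a verified answer): estimate loc and kappa directly from the weighted bin centres with standard circular statistics, and then note that a pdf integrates to 1, so overlaying it on a count histogram only needs a scale factor of total count times bin width. The kappa formula below is the common Best-Fisher approximation; bin_centers and counts are stand-ins for your binned data.

    import numpy as np
    from scipy.stats import vonmises

    def fit_vonmises_binned(bin_centers, counts):
        """Weighted circular mean (loc) and approximate kappa from binned angles."""
        w = counts / counts.sum()
        C = np.sum(w * np.cos(bin_centers))
        S = np.sum(w * np.sin(bin_centers))
        loc = np.arctan2(S, C)           # circular mean direction
        R = np.hypot(C, S)               # mean resultant length
        # Best-Fisher approximation for kappa given R.
        if R < 0.53:
            kappa = 2 * R + R**3 + 5 * R**5 / 6
        elif R < 0.85:
            kappa = -0.4 + 1.39 * R + 0.43 / (1 - R)
        else:
            kappa = 1 / (R**3 - 4 * R**2 + 3 * R)
        return loc, kappa

    # Toy binned data standing in for the original arrays.
    bin_centers = np.linspace(-np.pi, np.pi, 36, endpoint=False) + np.pi / 36
    counts = np.random.default_rng(1).poisson(50 * np.exp(np.cos(bin_centers - 0.5)))

    loc, kappa = fit_vonmises_binned(bin_centers, counts)

    # The pdf integrates to 1, so overlaying it on a count histogram just needs
    # a factor of (total count) * (bin width).
    bin_width = 2 * np.pi / bin_centers.size
    theta = np.linspace(-np.pi, np.pi, 500)
    scaled_pdf = vonmises.pdf(theta, kappa, loc=loc) * counts.sum() * bin_width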

How to plot density-contours on a ternary diagram in Python?

I have [x,y,z] data to plot on a ternary diagram, and I would like to plot contours based on the data's density in [x,y,z]-space. I have my data stored as a list of ((x1,y1,z1), (x2,y2,z2), etc.), and also in individual DataFrame columns.
I see many options (Marc Harper's function, plotly's create_ternary_contour, etc.) for plotting contours based on a 4th dimension (usually the output values of a function of x, y, z), but I haven't found a solution that defines the contours based on density. I think what I would like is analogous to the 2D solution available with hist2d and/or contour/contourf using a KDE approach... but on a ternary diagram.
Does anyone know how to do this? I suspect I would have to make some sort of grid in the ternary geometry, then evaluate the KDE of the [x,y,z] data and define contours from that somehow. I found a similar question here, but it is unfortunately in R, not Python.
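One way to sketch this without a dedicated ternary library: project the barycentric (x, y, z) compositions into 2D Cartesian coordinates, evaluate scipy.stats.gaussian_kde there, and contour the density on a grid restricted to the triangle. The Dirichlet sample below just stands in for your data.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import gaussian_kde

    # Toy compositions standing in for the original (x, y, z) data.
    rng = np.random.default_rng(0)
    comp = rng.dirichlet([2.0, 3.0, 5.0], size=300)      # rows sum to 1

    # Standard ternary -> Cartesian projection of barycentric coordinates.
    def to_cartesian(c):
        return c[:, 1] + 0.5 * c[:, 2], (np.sqrt(3) / 2) * c[:, 2]

    x2d, y2d = to_cartesian(comp)
    kde = gaussian_kde(np.vstack([x2d, y2d]))

    # Evaluate the KDE on barycentric grid points inside the triangle.
    res = 120
    a, b = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    mask = (a + b) <= 1.0
    grid = np.column_stack([a[mask], b[mask], 1.0 - a[mask] - b[mask]])
    gx, gy = to_cartesian(grid)
    density = kde(np.vstack([gx, gy]))

    # Contour the density; the plotted region outlines the ternary triangle.
    plt.tricontourf(gx, gy, density, levels=10)
    plt.gca().set_aspect("equal")
    plt.show()

The same gx, gy, density arrays could also be fed to a ternary-plotting package if you want proper ternary axis labels.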

Pick values from a CDF curve

Hi everyone,
I have a generic distribution of values; I've posted the graph below.
Is there a way to generate a CDF from these values? Using seaborn (sns) I can create a graph:
My goal is to pick a value on the y-axis and read the corresponding value off the x-axis of the CDF. I'm searching online but can't find a method that doesn't require going through curve normalisation.
I'm not sure of the exact data format, but something like numpy.cumsum will take a numpy array that represents a PDF and turn it into an array that represents the CDF.
From there, with your arrays of values and CDF, it is straightforward to find the x value at which the CDF reaches a given probability (which is what I understand you are looking for) by interpolating with "nearest" as the interpolation type (see the documentation on scipy.interpolate.interp1d, for example).
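A small sketch of that idea, with values and pdf standing in for your distribution:

    import numpy as np
    from scipy.interpolate import interp1d

    # Toy stand-ins for the original distribution.
    values = np.linspace(0, 10, 200)            # x-axis
    pdf = np.exp(-0.5 * (values - 4.0) ** 2)    # un-normalised density
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                              # make the CDF end at 1

    # Invert the CDF: give a probability (y-axis), get back an x value.
    inverse_cdf = interp1d(cdf, values, kind="nearest",
                           bounds_error=False, fill_value=(values[0], values[-1]))
    x_at_p = inverse_cdf(0.8)                   # x where the CDF first reaches ~0.8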

Is there any way to transform RGB values of a gradient into values within [0,1] in python?

I'm looking for a way to transform the different RGB values of a color gradient into single values ranging from 0 to 1 in Python. My current approach is to transform the RGB values into single greyscale values and then scale those, but I'm aware that some information gets lost in this process. So I was wondering if any of you could suggest a way to translate RGB values directly into single values and then scale those.
I'm doing this because I'm currently training CNNs on colored images generated with matplotlib.pyplot using matshow and a color bar. The images I'm using look somewhat like this:
(Image: Gramian angular field of sinusoidal data)
They are generated by transforming a 1-dimensional array of values ranging from 0 to 1 into a Gramian angular field.
Looking forward to any suggestions! Cheers!
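If the colormap used to render the images is known (say "viridis"), one way to avoid the greyscale information loss is to invert the colormap directly: sample it at many scalar positions and map each pixel to the nearest sampled color. A rough sketch, with rgb_to_scalar as an illustrative helper:

    import numpy as np
    import matplotlib.pyplot as plt

    def rgb_to_scalar(rgb_image, cmap_name="viridis", n_samples=256):
        """Map an (H, W, 3) RGB array in [0, 1] back to scalars in [0, 1]
        by nearest-color lookup against the (assumed known) colormap."""
        cmap = plt.get_cmap(cmap_name)
        # Table of the colormap's RGB values at n_samples evenly spaced scalars.
        table = cmap(np.linspace(0, 1, n_samples))[:, :3]
        flat = rgb_image.reshape(-1, 3)
        # Squared distance from every pixel to every table entry.
        dists = ((flat[:, None, :] - table[None, :, :]) ** 2).sum(axis=-1)
        idx = dists.argmin(axis=1)
        return (idx / (n_samples - 1)).reshape(rgb_image.shape[:2])

    # Round-trip check on synthetic data:
    scalars = np.random.default_rng(0).random((32, 32))
    rgb = plt.get_cmap("viridis")(scalars)[..., :3]
    recovered = rgb_to_scalar(rgb)     # close to `scalars`

This only works if the colormap is perceptually one-to-one (no repeated colors), which is the case for viridis and most sequential maps.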

Higher order interpolation for contour plots in python

Is anybody aware of a higher-order interpolation method (Catmull-Rom splines, cubic interpolation, etc.) for 2D contouring in Python?
scikit-image, Matplotlib, and OpenCV provide the functions measure.find_contours(), contour(), and findContours() respectively, but all are based on linear interpolation (also known as marching squares). I'm looking for something with higher accuracy, preferably in Python. Any pointers would be highly appreciated.
https://www.dropbox.com/s/orgr2yqhbbk2xnr/test.PNG
In the image linked above, I'm trying to extract the iso-value 25 from the scalar field f(x,y) = x^3 + y^3. I'm looking for 6 points with better accuracy than the 6 red points given by linear interpolation.
For unstructured 2D data (or triangulated data), you might be interested in the following class:
http://matplotlib.org/api/tri_api.html?highlight=cubictriinterpolator#matplotlib.tri.CubicTriInterpolator
which provides a Clough-Tocher (cubic) interpolator from a user-defined Triangulation and a field defined at the triangulation nodes. It can also be used through the helper class UniformTriRefiner:
http://matplotlib.org/api/tri_api.html?highlight=refine_field#matplotlib.tri.UniformTriRefiner.refine_field
http://matplotlib.org/mpl_examples/pylab_examples/tricontour_smooth_user.png
Nevertheless, the choice of a suitable interpolation method depends, of course, on your data set.
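A minimal sketch of that route on the field from the question, using UniformTriRefiner.refine_field (which defaults to the Clough-Tocher CubicTriInterpolator when no interpolator is passed):

    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.tri as mtri

    # Coarse samples of f(x, y) = x**3 + y**3 on a triangulated grid.
    x, y = np.meshgrid(np.linspace(0, 4, 9), np.linspace(0, 4, 9))
    x, y = x.ravel(), y.ravel()
    z = x**3 + y**3
    triang = mtri.Triangulation(x, y)

    # Refine the field; with no interpolator given, refine_field uses the
    # cubic Clough-Tocher interpolator. subdiv=3 splits each triangle into
    # 4**3 = 64 sub-triangles.
    refiner = mtri.UniformTriRefiner(triang)
    tri_fine, z_fine = refiner.refine_field(z, subdiv=3)

    # Iso-value 25: cubic refinement (red) vs. plain linear contouring (dashed).
    plt.tricontour(tri_fine, z_fine, levels=[25], colors="red")
    plt.tricontour(triang, z, levels=[25], colors="grey", linestyles="dashed")
    plt.gca().set_aspect("equal")
    plt.show()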
