Calculate gradient over different spacing than prescribed latitude/longitude grid in Python

I want to use the numpy.gradient function to calculate the gradient components of .nc4 variables like soil moisture/temperature. The grid spacing/resolution of my data is quite fine (~9 km), and I am interested in calculating the gradient across a larger delta (around 100 km). Is this possible using the gradient function alone, or do I have to regrid my data first?

numpy.gradient computes a two-point centered-difference approximation to the first derivative (with one-sided differences at the boundaries). If your data are on a 9 km grid and you want a 100 km estimate, you need to decide how you'd want that calculated. Fit a line to the points in the window and take its slope? Fit some higher-order curve? Essentially, gradient uses the fewest points it can; to go across 100 km you have many more points available and need to decide how best to use or reduce them. Two such choices are sketched below.
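For concreteness, here is a minimal sketch of two such choices, assuming a 2D array on a uniform 9 km grid. The synthetic field, the 11-point window, and the use of scipy.ndimage.correlate1d are illustrative assumptions, not something prescribed by numpy:

    import numpy as np
    from scipy.ndimage import correlate1d

    # Hypothetical 2D field (e.g., soil moisture) on a ~9 km grid.
    rng = np.random.default_rng(0)
    field = rng.standard_normal((200, 200)).cumsum(axis=1)
    dx = 9.0  # grid spacing in km

    # Native-resolution gradient: centered differences at 9 km.
    gy, gx = np.gradient(field, dx)

    # Option 1: least-squares slope over an 11-point (~99 km) window.
    # For centered offsets x_i the best-fit slope is
    # sum(x_i * y_i) / sum(x_i**2), i.e. a correlation with x_i weights.
    window = 11
    offsets = (np.arange(window) - window // 2) * dx
    kernel = offsets / (offsets ** 2).sum()
    gx_100km = correlate1d(field, kernel, axis=1, mode="nearest")

    # Option 2: subsample first, then difference at the coarse spacing.
    gx_coarse = np.gradient(field[:, ::window], window * dx, axis=1)

Option 1 uses every 9 km sample inside the 100 km window (the line-fit idea above); option 2 simply discards the intermediate points.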

Related

Python & Scipy: How to estimate a von Mises scale factor with binned angle data?

I have an array of binned angle data, and another array of weights for each bin. I am using the vmpar() function found here in order to estimate the loc and kappa parameters. I then use the vmpdf() function, found in the same script, to create a von mises probability density function (pdf).
However, the vmpar function does not give me a scale parameter the way scipy's vonmises.fit() function does, and I don't know how to use vonmises.fit() with binned data, since that function does not seem to accept weights as input.
My question is therefore: how do I estimate the scale from my binned angle data? The reason I want to adjust the scale is so that I can plot my original data and the pdf on the same graph. At the moment the pdf is not scaled to my original data, as seen in the image below (blue = original data, red line = pdf).
I am quite new to circular statistics, so perhaps there is a very easy way to implement this that I am overlooking. I need to figure this out ASAP, so I appreciate any help!
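One common fix for the plotting problem is to leave the fitted pdf alone and rescale it by the histogram's area (total weight times bin width). A minimal sketch on made-up binned data; the moment-based kappa below is a standard approximation in the same spirit as vmpar(), not the linked script's actual code:

    import numpy as np
    from scipy.stats import vonmises

    # Hypothetical binned angles: 36 bin centers (radians) and weights.
    n_bins = 36
    centers = np.linspace(-np.pi, np.pi, n_bins, endpoint=False) + np.pi / n_bins
    weights = np.exp(2.0 * np.cos(centers - 0.5))      # fake counts
    bin_width = 2 * np.pi / n_bins

    # Weighted circular mean and a moment-based kappa estimate.
    C = np.sum(weights * np.cos(centers)) / weights.sum()
    S = np.sum(weights * np.sin(centers)) / weights.sum()
    loc = np.arctan2(S, C)
    R = np.hypot(C, S)
    kappa = R * (2 - R ** 2) / (1 - R ** 2)   # simple standard approximation

    # The pdf integrates to 1; scale by (total weight * bin width) so the
    # curve and the histogram enclose the same area.
    pdf = vonmises.pdf(centers, kappa, loc=loc)
    scaled_pdf = pdf * weights.sum() * bin_width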

Python: How can I fit an ellipse to a cross-section of a peak in a 3D surface z = f(x,y)?

I have surface data Z over an [X,Y] mesh. In general Z = 0, but there will be peaks which stick up above this flat background, and these peaks will have roughly elliptical cross sections. These are diffraction intensity peaks, if anyone is curious. I would like to measure the elliptical cross section at about half the peak's maximum value.
So typically with diffraction, if it's a peak y = f(x), we want the Full Width at Half Max (FWHM): find the peak's maximum, intersect the peak at half that value, and measure the width. No problem.
Here I want to perform the analogous operation in a higher dimension. If the peak had a circular cross section, the FWHM would simply be the diameter of that cross section. However, these peaks are elliptical, so I want to slice the peak at its half max and then fit an ellipse to the cross section. That way I get the major and minor axes, inclination angle, and goodness of fit, all of which carry relevant information that a single FWHM number would not provide.
I can hack together a way to do this, but it's slow and messy, so it feels like there must be a better way. My question really comes down to: has anyone done this kind of problem before, and if so, are there modules I could use to perform the calculation quickly and with simple, clean code?
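One lightweight route, sketched here on synthetic data: trace the half-max contour with scikit-image, then fit its EllipseModel to the contour points. The tilted Gaussian below is a made-up stand-in for a diffraction peak:

    import numpy as np
    from skimage.measure import find_contours, EllipseModel

    # Hypothetical peak: a tilted elliptical Gaussian on a flat background.
    y, x = np.mgrid[0:200, 0:200]
    u = (x - 100) * np.cos(0.4) + (y - 100) * np.sin(0.4)
    v = (y - 100) * np.cos(0.4) - (x - 100) * np.sin(0.4)
    Z = np.exp(-(u ** 2 / 800 + v ** 2 / 200))

    # Trace the contour at half the peak maximum.
    contour = max(find_contours(Z, Z.max() / 2), key=len)  # longest contour

    # find_contours returns (row, col); flip to (x, y) before fitting.
    model = EllipseModel()
    model.estimate(contour[:, ::-1])
    xc, yc, a, b, theta = model.params            # center, semi-axes, tilt
    residuals = model.residuals(contour[:, ::-1]) # distances, for fit quality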

Are there any Python options for 3D linear piecewise/segmented regression?

I'm looking for a solution to fit a number of piecewise planes to linearly approximate a surface. Ideally the user could define the number of planes and the code would determine the "optimal" pieces of the data to fit them to.
There seem to be a number of 2D options discussed, e.g., in How to apply piecewise linear fit in Python?, but nothing in 3D.
Thanks!
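One hedged sketch of the idea, absent a drop-in 3D library: cluster the (x, y) locations into the requested number of pieces, then fit a least-squares plane per piece. The data and the use of scikit-learn's KMeans are illustrative assumptions; this does not jointly optimize the piece boundaries:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical samples of a surface with two planar regimes.
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, size=(500, 2))            # (x, y) locations
    z = np.where(X[:, 0] < 5, X @ [1.0, 0.5], X @ [-0.5, 2.0] + 7.5)

    n_planes = 2                                     # user-defined piece count
    labels = KMeans(n_clusters=n_planes, n_init=10).fit_predict(X)

    # Fit z = a*x + b*y + c by least squares within each cluster.
    planes = []
    for k in range(n_planes):
        pts, zs = X[labels == k], z[labels == k]
        A = np.column_stack([pts, np.ones(len(pts))])
        coef, *_ = np.linalg.lstsq(A, zs, rcond=None)
        planes.append(coef)

A closer-to-"optimal" variant would iterate Lloyd-style: reassign each point to the plane with the smallest residual, refit, and repeat until the labels stop changing.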

Python: detect the dominant wavelength of a signal

I have this signal, for which I want to calculate the dominant wavelength, which would be the distance between the pronounced minima where the oscillations occur:
Which tool in scipy should I look into for this task?
It depends on where you get the data from.
If you only have the (x, y) points of the graph, you can either hack it by taking all the x values corresponding to the minimal y (be careful of floating-point equality, though), or use the Fourier transform: identify the main wave (the one with the largest amplitude) and deduce its wavelength. For the latter, you would use the Fast Fourier Transform from scipy: https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/fftpack.html#fast-fourier-transforms
If you have a functional description of the signal, either sample it as you would to construct the graph and apply the above, or take its derivative to find the minima mathematically (the best method). You could also use scipy to find the minima numerically (https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.minimize.html), but you have to manually specify intervals that each contain only one local minimum.
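A minimal version of the FFT route on a synthetic signal, using the current scipy.fft module rather than the older scipy.fftpack the link describes:

    import numpy as np
    from scipy.fft import rfft, rfftfreq

    # Hypothetical evenly sampled signal: one dominant wave plus noise.
    dx = 0.1                                   # sample spacing
    x = np.arange(0, 100, dx)
    rng = np.random.default_rng(2)
    y = np.sin(2 * np.pi * x / 7.5) + 0.3 * rng.standard_normal(x.size)

    spectrum = np.abs(rfft(y - y.mean()))      # subtract the DC offset first
    freqs = rfftfreq(y.size, d=dx)
    peak = np.argmax(spectrum[1:]) + 1         # skip the zero-frequency bin
    wavelength = 1.0 / freqs[peak]             # ~7.5 in the units of x

Note that the wavelength resolution is limited by the record length; zero-padding or interpolating around the spectral peak refines it.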

Define a 2D Gaussian probability with five peaks

I have 2D data containing five peaks. Could I fit five 2D Gaussian functions to obtain the peaks? In my problem, the peaks do not come from a clustering problem, for which I think EM would be the appropriate answer.
In my case I measure a variable in x-y space and it shows maxima in more than one position. Is fitting a Fourier series or using the Expectation-Maximization method still an applicable solution to my problem?
In order to build my likelihood, do I just need to add up five 2D Gaussian distributions, with the x-y position and the height of each peak as variables?
If I understand what you're asking, check out Gaussian Mixture Models and Expectation Maximization. I don't know of any pre-implemented versions of these in Python, although I haven't looked too hard.
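(For the sample-based case, scikit-learn does ship an EM-based implementation, sklearn.mixture.GaussianMixture.) For a measured surface like the one described, the "add up five Gaussians" likelihood in the question maps directly onto a least-squares fit; a hedged sketch on synthetic data with scipy.optimize.curve_fit:

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, x0, y0, sx, sy, h):
        # One 2D Gaussian bump of height h centered at (x0, y0).
        x, y = xy
        return h * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                            + (y - y0) ** 2 / (2 * sy ** 2)))

    def five_gaussians(xy, *p):
        # Sum of five bumps; p packs 5 parameters per peak.
        return sum(gauss2d(xy, *p[5 * i:5 * i + 5]) for i in range(5))

    # Synthetic gridded measurements with five known peaks plus noise.
    y, x = np.mgrid[0:100, 0:100]
    true = [(20, 20, 5, 5, 1.0), (80, 30, 6, 4, 0.8), (50, 50, 4, 4, 1.2),
            (30, 75, 5, 6, 0.9), (75, 80, 4, 5, 1.1)]
    Z = sum(gauss2d((x, y), *t) for t in true)
    Z = Z + 0.02 * np.random.default_rng(3).standard_normal(Z.shape)

    # curve_fit wants flat arrays; good initial guesses (p0) matter a lot --
    # in practice, seed them from local maxima of the data.
    p0 = [c for t in true for c in t]
    popt, pcov = curve_fit(five_gaussians, (x.ravel(), y.ravel()),
                           Z.ravel(), p0=p0)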
