Python: detect dominant wavelength of a signal

I have this signal, for which I want to calculate the dominant wavelength, which would be the distance between the pronounced minima where the oscillations occur:
Which tool in scipy should I look into for this mission?

It depends where you get the data from.
If you only have the (x, y) points of the graph, you can either hack it by taking all the x values corresponding to the minimal y (be careful of floating-point equality, though), or use the Fourier transform: identify the main wave (the one with the biggest amplitude) and deduce its wavelength. For the latter, you would use the Fast Fourier Transform from scipy: https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/fftpack.html#fast-fourier-transforms
If you have an analytical description of the function, either sample it as you do to construct the graph and apply the above, or take its derivative to find the minima mathematically (the best method). You could also use scipy to find the minima numerically (https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.minimize.html), but you have to manually specify intervals that each contain only one local minimum.
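A minimal sketch of the FFT option, assuming evenly spaced samples; the names y (signal values) and dx (spacing between consecutive x values) are placeholders for your data:

import numpy as np
from scipy.fft import rfft, rfftfreq

# y: signal samples, dx: spacing between x values (placeholders for your data)
spectrum = np.abs(rfft(y - np.mean(y)))        # subtract the mean so the DC bin does not dominate
freqs = rfftfreq(len(y), d=dx)
dominant_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin
dominant_wavelength = 1.0 / dominant_freq            # wavelength in the units of x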

Related

Any idea why FFT of rectangular pulse behaves like this?

If I do the FFT of the "usual" rectangular pulse function I get this "weird" result:
However, if I roll the same signal by half (f = np.roll(f, f.size//2)) and compute the FFT, I get what I was expecting from the non-rolled signal, i.e., the sinc function as the result:
By the way, if I do what most people do, i.e., plot the magnitude of the spectrum (instead of just the real part) of either signal (usual or rolled), I get exactly the same result (which resembles the sinc more closely).
I was expecting to get the sinc function from the real part of the FFT of the "usual" rectangular function.
Does anybody know why I need to roll the rectangular function in order to produce the sinc function?
I'm using scipy's fft.
Thanks
This is simply the shift property of the FFT (and IFFT). If you circularly shift the data in one domain, it's the same as multiplying by a complex sinusoid in the other domain, with the frequency of that modulation proportional to the amount of the shift.
Works the same way for shifts in either the time domain or frequency domain, causing modulation in the other domain.
For "unshifted" results, the 0 point (or circular symmetry center for strictly real FFT results) usually needs to be at the first element of the FFT or IFFT input vector, not the middle.
See: https://www.dsprelated.com/freebooks/mdft/Shift_Theorem.html
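A minimal demonstration of the shift property, with an arbitrary pulse width and array length (the sizes below are just examples):

import numpy as np
from scipy.fft import fft

N = 128
f = np.zeros(N)
f[N//2 - 8 : N//2 + 9] = 1.0               # rectangle centered exactly at index N//2
F_centered = fft(f)                         # real part alternates sign: the shift-theorem modulation
F_rolled = fft(np.roll(f, f.size // 2))     # pulse center moved to index 0 -> purely real, sinc-like
print(np.allclose(np.abs(F_centered), np.abs(F_rolled)))   # magnitudes are identical: True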

scipy.interpolate.interp1d folding map

I am plotting the result of an interpolation in a periodic domain, namely the Earth Mercator projection map, where [0, 2*pi] (or [0, 360]) is the domain of the longitude. As you can see in the picture below, I'm plotting a ground track.
I first obtain r, i.e. the position, and then project it onto the Earth. Since the coordinate transformation involves trigonometric functions, the results I obtain are necessarily restricted to a domain where the inverse is bijective. To obtain this plot I've used atan2 to recover the full angular range, as well as manipulating arccos to extend the domain of the inverse function.
All good up to now. The issue is that when I interpolate the resulting points, the interpolant naturally does not respect the folding (periodicity) of the domain.
I just wanted to know if there is any way around this, apart from transforming my data into a non-periodic domain, interpolating it, and then applying % (2*np.pi). This option, even if doable, means touching those inverse functions even more. The other option I thought of was interpolating in chunks of only increasing values and concatenating them.
I found nothing in the scipy documentation.
I solved the issue by implementing something like the following. Notice that I am using the astropy units module.
adder = 2 * np.pi * u.rad
for i in range(1, len(lons)):
    # a jump of more than ~1 rad between consecutive samples indicates a wrap
    if abs(lons[i].value - lons[i-1].value) > 1:
        sgn = np.sign(lons[i].value - lons[i-1].value)
        lons[i:] -= sgn * adder
After doing this, interpolate and apply the modulo:
f_lons = interp1d(t,lons)
lons = f_lons(new_t) % (2*np.pi)
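For plain NumPy arrays (without the astropy units), numpy.unwrap does essentially the same jump removal, so the workaround can also be sketched as (t, lons and new_t as above):

import numpy as np
from scipy.interpolate import interp1d

lons_cont = np.unwrap(lons)              # remove the 2*pi jumps so the series is continuous
f_lons = interp1d(t, lons_cont)
lons_new = f_lons(new_t) % (2 * np.pi)   # fold back into [0, 2*pi)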

How would you find the roots of a linear interpolation in Python?

I'm working on calculating the full width at half maximum for light curves with irregular shapes. My approach right now is to
fit a spline (scipy.interpolate.UnivariateSpline) to the data (minus half maximum, so that y = 0 is the half max)
find the roots of the spline (UnivariateSpline.roots())
find the difference between the first and last roots to determine the width of the curve at half max
The roots method is only available for cubic splines, but I need the spline to be linear, or else I get results like the image below (due to the spacing of the data points). I should note that I'm working with hundreds of datasets, so manually selecting these "roots" is not quite feasible.
Does anyone have any tricks to find the roots of a linear spline (or all of the x-values for a given y value)? Many thanks!
You can use make_interp_spline(..., k=1) to get a BSpline object, convert it to a PPoly via PPoly.from_spline(), and the result has a .roots() method.
Alternatively, as other answers suggest, just find the relevant interval and solve for the root of the linear segment.
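A minimal sketch of that approach; x, y are the light-curve samples and half_max is the half-maximum level (placeholder names):

from scipy.interpolate import make_interp_spline, PPoly

spline = make_interp_spline(x, y - half_max, k=1)              # linear spline through the shifted data
crossings = PPoly.from_spline(spline).roots(extrapolate=False)  # all x where the spline crosses zero
fwhm = crossings.max() - crossings.min()                        # width between the outermost crossings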

Find relative scale in monocular Visual Odometry without PnP

I am implementing a standard VO algorithm with a few changes: extract features, match the feature points, find the essential matrix and decompose it to get the pose. After initialization, however, instead of using 3D-2D motion estimation (PnP) for subsequent frames, I'm using the same 2D-2D motion estimation (via the essential matrix). I find that the 2D-2D estimation seems a lot more accurate than 3D-2D.
To find the relative scale of the second pose with respect to the first, I can find the common points (those that were triangulated for both frame pairs). According to the Visual Odometry Tutorial by Scaramuzza, one can find the relative scale from the ratio of the relative distances between common point pairs.
If f13D and f23D are the triangulated 3D points from subsequent frame pairs, I choose point pairs at random and compute the distances; here is a rough code snippet for the same:
import numpy as np

def relative_scale(f13D, f23D):
    # sample random pairs of distinct indices from the common triangulated points
    indices = np.random.choice(np.arange(0, len(f23D)), size=(5 * len(f23D), 2), replace=True)
    indices = indices[indices[..., 0] != indices[..., 1]]
    # ratio of pairwise distances between the two reconstructions
    num = np.linalg.norm(f13D[indices[..., 0]] - f13D[indices[..., 1]], axis=1)
    den = np.linalg.norm(f23D[indices[..., 0]] - f23D[indices[..., 1]], axis=1)
    return np.median(num / den)
I have also tried replacing the last line with a linear RANSAC estimator. However, since the triangulation is not perfect, these ratios are extremely noisy, and the scale estimate varies significantly when using different numpy seeds.
Is this the right way to implement relative scale in monocular VO as described in the article? If not, what is the best way to do it? (I do not wish to use PnP since the rotation seems less accurate.)

Calculate gradient over different spacing than prescribed latitude/longitude grid in python

I want to use the numpy.gradient function to calculate the gradient components of .nc4 variables like soil moisture and temperature. The grid spacing/resolution of my data is quite fine (around 9 km), and I am interested in calculating the gradient across a larger delta (like 100 km). Is this possible using the gradient function alone, or do I have to regrid my data?
numpy.gradient computes a two-point centered-difference approximation of the first derivative. If your data are on a 9 km grid and you want a 100 km estimate, you need to decide how that should be calculated. Fit a line to the data and take the slope? Fit some higher-order curve? Essentially, gradient uses the fewest points it can; if you want the gradient across 100 km, you have many more points and need to decide how best to use or reduce them. One option is sketched below.
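One possible reduction, as a sketch: fit a straight line to every point within a window of the desired width and take its slope as the coarse gradient. The function below is hypothetical (1-D only; a 2-D field would need this applied along each axis), with coordinates given in km:

import numpy as np

def coarse_gradient(values, coords_km, window_km=100.0):
    # slope of a least-squares line fitted over a sliding window of width window_km
    half = window_km / 2.0
    slopes = np.full(values.shape, np.nan, dtype=float)
    for i, x0 in enumerate(coords_km):
        mask = np.abs(coords_km - x0) <= half
        if mask.sum() >= 2:
            slopes[i] = np.polyfit(coords_km[mask], values[mask], 1)[0]   # slope term of the fit
    return slopes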
