I'm working on calculating the full width at half maximum for light curves with irregular shapes. My approach right now is to
1. fit a spline (scipy.interpolate.UnivariateSpline) to the data (minus the half maximum, so that y = 0 is the half max)
2. find the roots of the spline (UnivariateSpline.roots())
3. find the difference between the first and last roots to determine the width of the curve at half max
The roots method is only available for cubic splines, but I need the spline to be linear, or else I get results like the image below (due to the spacing of the data points). I should note that I'm working with hundreds of datasets, so manually selecting these "roots" is not quite feasible.
Does anyone have any tricks to find the roots of a linear spline (or all of the x-values for a given y value)? Many thanks!
You can use scipy.interpolate.make_interp_spline(..., k=1) to get a BSpline object, convert it to a piecewise polynomial via PPoly.from_spline(), and the result has a .roots() method.
Alternatively, as other answers suggest, just find the relevant interval and solve for the root of the linear segment.
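For example, a minimal sketch with made-up data (the times and fluxes below are hypothetical; the half-max shift follows the question's setup):

import numpy as np
from scipy.interpolate import make_interp_spline, PPoly

# Hypothetical light curve; replace t and flux with your own data.
t = np.linspace(0, 10, 40)
flux = np.exp(-0.5 * ((t - 5.0) / 1.3) ** 2)
half_max = flux.max() / 2.0

# Linear spline through (flux - half_max), so y = 0 marks the half maximum.
bspl = make_interp_spline(t, flux - half_max, k=1)

# Convert to a piecewise polynomial; roots() returns every half-max crossing.
roots = PPoly.from_spline(bspl).roots(extrapolate=False)

fwhm = roots.max() - roots.min()   # width between first and last crossing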
I have surface data Z over an [X,Y] mesh. In general Z = 0, but there will be peaks which stick up above this flat background, and these peaks will have roughly elliptical cross sections. These are diffraction intensity peaks, if anyone is curious. I would like to measure the elliptical cross section at about half the peak's maximum value.
So typically with diffraction, if it's a peak y = f(x), we want the Full Width at Half Max (FWHM), which can be found by locating the peak's maximum, intersecting the peak at half that value, and measuring the width. No problem.
Here I want to perform the analogous operation, but in a higher dimension. If the peak had a circular cross section, the FWHM would simply be the diameter of that cross section. However, these peaks are elliptical, so I want to slice the peak at its half max and then fit an ellipse to the cross section. That way I can get the major and minor axes, inclination angle, and goodness of fit, all of which contain relevant information that a simple FWHM number would not provide.
I can hack together a way to do this, but it's slow and messy, so it feels like there must be a better way to do this. So my question really just comes down to, has anyone done this kind of problem before, and if so, are there any modules that I could use to perform the calculation quickly and with a simple, clean code?
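One possible sketch of this (assuming scikit-image is available; the peak data below is made up, and the fitted axes come out in pixel units, so scale them by your grid spacing):

import numpy as np
from skimage.measure import find_contours, EllipseModel

# Hypothetical elliptical peak on a flat background; replace Z with the real data.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
Z = np.exp(-(x**2 / 0.5 + y**2 / 2.0))

# Contour at half the peak's maximum (background assumed ~0); keep the longest contour.
contour = max(find_contours(Z, Z.max() / 2.0), key=len)   # (row, col) coordinates

# Fit an ellipse to the contour points (swap to (x, y) = (col, row) first).
ellipse = EllipseModel()
if ellipse.estimate(contour[:, ::-1]):
    xc, yc, a, b, theta = ellipse.params   # center, semi-axes, inclination angle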
I am implementing a standard VO algorithm (extract features, match the feature points, find the essential matrix, and decompose it to get the pose), with a few changes. After initialization, however, instead of using 3D-2D motion estimation (PnP) for subsequent frames, I'm using the same 2D-2D motion estimation (via the essential matrix). I find that 2D-2D estimation seems a lot more accurate than 3D-2D.
To find the relative scale of the second pose with respect to the first, I can find the common points (those that were triangulated for both frame pairs). According to the Visual Odometry Tutorial by Scaramuzza, one can find the relative scale from the ratio of relative distances between common point pairs.
If f13D and f23D are the triangulated 3D points from subsequent frame pairs, I choose point pairs at random and compute the distances. Here is a rough code snippet for the same:
import numpy as np

def relative_scale(f13D, f23D):
    # Randomly pair up the common triangulated points and drop degenerate pairs.
    indices = np.random.choice(np.arange(0, len(f23D)), size=(5 * len(f23D), 2), replace=True)
    indices = indices[indices[..., 0] != indices[..., 1]]
    # Ratio of the same pairwise distances in the two frame pairs; the median is the scale.
    num = np.linalg.norm(f13D[indices[..., 0]] - f13D[indices[..., 1]], axis=1)
    den = np.linalg.norm(f23D[indices[..., 0]] - f23D[indices[..., 1]], axis=1)
    return np.median(num / den)
I have also tried replacing the last line with a linear RANSAC estimator. However, since the triangulation is not perfect, these ratios are extremely noisy, and the scale estimate therefore varies significantly across different numpy seeds.
Is this the right way to implement relative scale in monocular VO as described in the article? If not, what is the best way to do it? (I do not wish to use PnP, since its rotation estimate seems less accurate.)
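For reference, the linear RANSAC variant mentioned above could be sketched roughly like this (assuming scikit-learn; fitting num ≈ scale * den with no intercept makes the fitted slope the scale estimate, with num and den as in the snippet above):

import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

# Hypothetical pairwise distances from the two frame pairs (replace with num/den from above).
den = np.abs(np.random.randn(500)) + 0.1
num = 2.0 * den + 0.05 * np.random.randn(500)   # true scale of 2 plus noise

# Fit num ≈ scale * den robustly; the slope of the RANSAC inlier fit is the scale estimate.
ransac = RANSACRegressor(LinearRegression(fit_intercept=False))
ransac.fit(den.reshape(-1, 1), num)
scale = ransac.estimator_.coef_[0]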
I am working with Python for this problem.
Say I have some point p and an arbitrary 1-dimensional curve in an n-dimensional (compact) space. How can I find the closest point on the curve to my designated point p? I found an answer in Find minimum distance from point to complicated curve, but Shapely only works in the plane, and the curves I am working with live in spaces whose number of dimensions ranges from 2 to 16, due to the number of parameters defining the curves.
The expressions of these curves are always known explicitly.
I also tried using scipy.optimize with SLSQP to minimize the distance function, but it does not always work. For example, if the curve is np.sin(15*x) and the points lie in the unit square centered at (0.5, 0.5), there are parts of the curve that are inside the square in only one of the two dimensions, and the minimization fails for some points.
If you know the analytical form of the curve, you can write the distance from a curve point (x(t), y(t)) to your external point in analytical form.
Then take the derivative of that distance expression with respect to the curve parameter and find its roots.
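A practical numerical variant of the same idea is to minimize the squared distance over the curve parameter t: sample t coarsely to get near the global minimum (the distance can have many local minima, as with np.sin(15*x)), then refine with a bounded scalar minimization. A sketch with a made-up 2-D curve and point (the same code works in any number of dimensions):

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical parametric curve gamma(t) in R^2 and query point p.
gamma = lambda t: np.array([t, np.sin(15 * t)])
p = np.array([0.3, 0.7])

dist2 = lambda t: np.sum((gamma(t) - p) ** 2)   # squared distance to p

# Coarse sampling to bracket the global minimum, then a bounded refinement.
ts = np.linspace(0.0, 1.0, 2000)
t0 = ts[np.argmin([dist2(t) for t in ts])]
dt = ts[1] - ts[0]
res = minimize_scalar(dist2, bounds=(t0 - dt, t0 + dt), method='bounded')

closest = gamma(res.x)   # closest point on the curve to p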
I've been tasked to develop an algorithm that, given a set of sparse points representing measurements of an existing surface, would allow us to compute the z coordinate of any point on the surface. The challenge is to find a suitable interpolation method that can recreate the 3D surface given only a few points and extrapolate values also outside of the range containing the initial measurements (a notorious problem for many interpolation methods).
After trying to fit many analytic curves to the points, I've decided to use RBF interpolation, as I thought this would better reproduce the surface given that the points should all lie on it (I'm assuming the measurements have negligible error).
The first results are quite impressive considering the few points that I'm using.
Interpolation results
In the picture that I'm showing the blue points are the ones used for the RBF interpolation which produces the shape represented in gray scale. The red points are instead additional measurements of the same shape that I'm trying to reproduce with my interpolation algorithm.
Unfortunately there are some outliers, especially when I'm trying to extrapolate points outside of the area where the initial measurements were taken (you can see this in the upper right and lower center insets in the picture). This is to be expected, especially in RBF methods, as I'm trying to extract information from an area that initially does not have any.
Apparently the RBF interpolation is trying to flatten out the surface while I would just need to continue with the curvature of the shape. Of course the method does not know anything about that given how it is defined. However this causes a large discrepancy from the measurements that I'm trying to fit.
That's why I'm asking if there is any way to constrain the interpolation method to keep the curvature, or to use a different radial basis function that doesn't flatten out so quickly at the border of the interpolation range. I've tried different combinations of the epsilon parameter and distance functions without luck. This is what I'm using right now:
from scipy import interpolate
import numpy as np

# Thin-plate RBF through the measured points (df holds the X, Y, Z measurement columns).
spline = interpolate.Rbf(df.X.values, df.Y.values, df.Z.values,
                         function='thin_plate')

# Evaluate on a regular grid that also extends beyond the measured area.
X, Y = np.meshgrid(np.linspace(xmin.round(), xmax.round(), precision),
                   np.linspace(ymin.round(), ymax.round(), precision))
Z = spline(X, Y)
I was also thinking of creating some additional dummy points outside of the interpolation range to constrain the model even more, but that would be quite complicated.
I'm also attaching an animation to give a better idea of the surface.
Animation
Just wanted to post my solution in case someone has the same problem. The issue was indeed with the scipy implementation of RBF interpolation. I instead adopted a more flexible library, https://rbf.readthedocs.io/en/latest/index.html#.
The results are pretty cool! Using the following options
from rbf.interpolate import RBFInterpolant
spline = RBFInterpolant(X_obs, U_obs, phi='phs5', order=1, sigma=0.0, eps=1.)
I was able to get the right shape even at the edge.
Surface interpolation
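For completeness, evaluating the interpolant is just a call on an (M, 2) array of query points (a sketch; X_obs and U_obs above are assumed to be the (N, 2) measurement coordinates and the measured z values, and the grid bounds below are placeholders):

import numpy as np

# Build an (M, 2) array of query points and evaluate the fitted interpolant there.
gx, gy = np.meshgrid(np.linspace(0.0, 100.0, 200), np.linspace(0.0, 100.0, 200))
pts = np.column_stack([gx.ravel(), gy.ravel()])
U_grid = spline(pts).reshape(gx.shape)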
I've played around with the different phi functions and here is the boxplot of the spread between the interpolated surface and the points that I'm testing the interpolation against (the red points in the picture).
Boxplot
With phs5 I get the best result, with an average spread of about 0.5 mm on the upper surface and 0.8 mm on the lower surface. Before, I was getting a similar average but with many outliers > 15 mm. Definitely a success :)
I have 2D data that contains five peaks. Could I fit five 2D Gaussian functions to obtain the peaks? In my problem, the peaks do not come from a clustering problem, for which I think EM would be an appropriate answer.
In my case I measure a variable in x-y space and it shows a maximum in more than one position. Is fitting a Fourier series or using the Expectation-Maximization method still an applicable solution to my problem?
In order to build my likelihood, do I just need to add up the five 2D Gaussian distributions, with the x and y position and the height of each peak as variables?
If I understand what you're asking, check out Gaussian Mixture Models and Expectation Maximization. I don't know of any pre-implemented versions of these in Python, although I haven't looked too hard.
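If the goal is a direct least-squares fit of the "sum of five 2D Gaussians" model described in the question (rather than EM on samples), a minimal sketch with scipy.optimize.curve_fit might look like this; the data, peak positions, and starting guesses below are all hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, x0, y0, amp, sx, sy):
    x, y = xy
    return amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2)))

def five_gauss(xy, *params):
    # params holds 5 blocks of (x0, y0, amp, sx, sy), one per peak
    return sum(gauss2d(xy, *params[5*i:5*i + 5]) for i in range(5))

# Hypothetical data: five synthetic peaks plus noise; replace z with the real measurement.
x, y = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
true = [2, 2, 1.0, 0.5, 0.5,  5, 2, 1.2, 0.6, 0.4,  8, 2, 0.8, 0.5, 0.7,
        3, 7, 1.5, 0.4, 0.6,  7, 8, 1.0, 0.5, 0.5]
z = five_gauss((x, y), *true) + 0.02 * np.random.randn(*x.shape)

# Rough initial guesses near each peak, then fit all 25 parameters at once.
p0 = [2, 2, 1, 0.5, 0.5,  5, 2, 1, 0.5, 0.5,  8, 2, 1, 0.5, 0.5,
      3, 7, 1, 0.5, 0.5,  7, 8, 1, 0.5, 0.5]
popt, _ = curve_fit(five_gauss, (x.ravel(), y.ravel()), z.ravel(), p0=p0)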