Interpolation with bounded curvature - python

With the code below, I use the scipy.interpolate.splprep routine to interpolate a set of points with B-splines. As the left figure shows, the resulting curve is quite "sharp" near the 6th point: its curvature is too large (see the right figure).
I want the curvature to be limited to <10. I can improve this by increasing the smoothing factor s: setting s=8, for example, gives a curve that satisfies my curvature bound. However, I currently have to find this smoothing factor through trial and error (and a higher s does not necessarily imply a lower curvature). Is there any way I can explicitly bound the curvature? I know it is theoretically possible based on this question.
Code (Python fiddle)
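The fiddle itself isn't reproduced here. For reference, a minimal sketch of the kind of setup described, with made-up sample points and the standard parametric curvature formula (neither is the asker's exact code):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Made-up points standing in for the asker's data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 4.5, 5.0, 6.0, 7.0])
y = np.array([0.0, 0.5, 0.2, 0.8, 0.3, 2.0, 0.4, 0.6, 0.1])

# Parametric cubic B-spline; s is the smoothing factor (s=0 interpolates exactly).
tck, u = splprep([x, y], s=0)

# Evaluate the curve and its first two derivatives on a fine grid.
uu = np.linspace(0, 1, 500)
xs, ys = splev(uu, tck)
dx, dy = splev(uu, tck, der=1)
ddx, ddy = splev(uu, tck, der=2)

# Curvature of a parametric curve: k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
curvature = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
print("max |curvature|:", np.abs(curvature).max())
```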

This is only a suggestion, but one possible alternative to deliberate smoothing is to introduce additional (fake) data points through which the curve should pass.
The condition for inserting such points is a sharp reversal in direction between consecutive (real) points. This can be detected by comparing the angle between the vectors pointing from a node to its two neighbouring nodes: if this angle is smaller than a certain threshold, the node is considered a 'reversal' point.
Once such reversal points are identified, you can introduce P (fake) points, where the integer P depends on how smooth you would like these transitions to be. E.g. if you don't mind a U-shape, you could introduce just one fake point per reversal point, slightly offset in the direction orthogonal to the reversal direction.
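A minimal sketch of that idea in 2D (the angle threshold, offset size, and P=1 are assumptions to be tuned to your data):

```python
import numpy as np

def add_fake_points(pts, angle_thresh_deg=45.0, offset=0.1):
    """Insert one fake point next to every 'reversal' node. A node is a reversal
    if the angle between the vectors pointing to its two neighbours is small
    (straight line -> 180 deg, sharp U-turn -> near 0 deg)."""
    pts = np.asarray(pts, dtype=float)
    out = [pts[0]]
    for i in range(1, len(pts) - 1):
        v_prev = pts[i - 1] - pts[i]
        v_next = pts[i + 1] - pts[i]
        cos_a = np.dot(v_prev, v_next) / (np.linalg.norm(v_prev) * np.linalg.norm(v_next))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        out.append(pts[i])
        if angle < angle_thresh_deg:              # sharp reversal at node i
            d = pts[i] - pts[i - 1]               # travel (reversal) direction
            n = np.array([-d[1], d[0]])           # orthogonal to it (2D)
            n /= np.linalg.norm(n)
            out.append(pts[i] + offset * n)       # one fake point, slightly offset
    out.append(pts[-1])
    return np.array(out)
```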

One known method is to interpolate the points with Dubins paths. A Dubins path is the shortest curve between two points with prescribed tangent directions, subject to a minimum turning radius (i.e. a maximum curvature). One implementation, called Markov-Dubins interpolation, is described in this paper.
However, I do not recommend this approach, because the resulting curves are not smooth: the curvature switches between fixed levels, so there are discontinuities in the acceleration.
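For completeness, a sketch of how this looks with the third-party pydubins package (pip install dubins); the configurations and turning radius below are made up:

```python
import dubins  # third-party package: pip install dubins

# A configuration is (x, y, heading); curvature is bounded by 1/turning_radius.
q0 = (0.0, 0.0, 0.0)
q1 = (4.0, 4.0, 3.14159)
turning_radius = 0.1  # max curvature = 1/0.1 = 10

path = dubins.shortest_path(q0, q1, turning_radius)
configurations, _ = path.sample_many(0.05)  # points sampled every 0.05 along the path
```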

Not at the moment, at least not in scipy. Sadly.

Pointcloud of rope with desired start and end point
I have a point cloud of a rope-like object with about 300 points. I'd like to sort the 3D coordinates of that point cloud so that one end of the rope has index 0 and the other end has the last index, as shown in the image. Other point clouds of that object might be U-shaped, so I can't sort by the X, Y or Z coordinate. For the same reason I also can't sort by the distance to a single point.
I have looked at KDTree from sklearn and scipy to compute the nearest neighbour of each point, but I don't know how to go from there to sorting the points into an array without duplicate entries.
Is there a way to sort these coordinates into an array so that, starting from one end, the array is appended with the coordinates of the next closest point?
First of all, there is obviously no strict solution to this problem (there is not even a strict definition of what you want to get). So anything you write will be a heuristic of some sort, which will fail in some cases, especially as your point cloud takes on a non-trivial shape (do you allow loops in your rope, for example?).
That said, a simple approach may be to build a graph with the points as vertices, with every two points connected by an edge whose weight equals the straight-line distance between them.
Then build a minimum spanning tree of this graph. This provides a kind of skeleton for your point cloud, and you can devise simple algorithms on top of this skeleton.
For example, sort all points by their distance to the start of the rope, measured along this tree: there is exactly one path between any two vertices of a tree, so for each vertex compute the length of its single path to the rope start and sort the vertices by this distance.
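A minimal sketch of this approach with scipy (a dense distance matrix is fine for ~300 points; start_idx is assumed to be a known end of the rope):

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

points = np.random.rand(300, 3)  # stand-in for the rope point cloud
start_idx = 0                    # assumed known index of one rope end

# Complete graph weighted by straight-line distance, then its MST.
dists = distance_matrix(points, points)
mst = minimum_spanning_tree(dists)

# Distance from the start measured along the tree (the path in a tree is unique).
along_tree = shortest_path(mst, directed=False, indices=start_idx)

order = np.argsort(along_tree)
sorted_points = points[order]
```

If the rope end is not known in advance, a common trick is to take the vertex with the largest along-tree distance from an arbitrary vertex as the start.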
As suggested in the other answer, there is no strict solution to this problem and there can be edge cases such as loops, spirals, or tubes, but you can use heuristic approaches for your use case. Read about heuristics such as hill climbing, simulated annealing, and genetic algorithms.
For any heuristic approach you need a method to measure how good a solution is: say I give you two candidate orderings of the points, how will you identify which one is better? This measure depends on your use case.
One approach off the top of my head: hill climbing.
Method to measure the goodness of a solution: take the Euclidean distances between all adjacent elements of the array and sum them.
Steps:
1. Create a randomised array of all the points.
2. Select two random indices and swap the elements at those indexes, then check whether this improves your answer (i.e. whether the sum of Euclidean distances between adjacent elements decreases).
3. If it improves your answer, keep the swap.
4. Repeat steps 2-3 for a large number of epochs (e.g. 10^6).
This solution can stagnate because it lacks diversity; for better results use simulated annealing or genetic algorithms.
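A minimal sketch of this hill-climbing loop (array contents and epoch count are illustrative):

```python
import numpy as np

def path_length(points, order):
    """Goodness measure: sum of Euclidean distances between adjacent elements."""
    p = points[order]
    return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

def hill_climb(points, epochs=10**6, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(points))             # step 1: randomised array
    best = path_length(points, order)
    for _ in range(epochs):
        i, j = rng.integers(len(points), size=2)     # step 2: two random indices
        order[i], order[j] = order[j], order[i]      # swap
        cost = path_length(points, order)
        if cost < best:                              # step 3: keep improving swaps
            best = cost
        else:
            order[i], order[j] = order[j], order[i]  # undo non-improving swaps
    return order
```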

How to set one fitting parameter larger than the other as a constraint in iminuit in Python?

I have two related fitting parameters. They have the same fitting range. Let's call them r1 and r2. I know I can limit the fitting range using minuit.limits, but I have an additional criterion: r2 has to be smaller than r1. Can I do that in iminuit?
I've found this; I hope it can help you!
Extracted from: https://iminuit.readthedocs.io/en/stable/faq.html
**Can I have parameter limits that depend on each other (e.g. x^2 + y^2 < 3)?**
MINUIT was only designed to handle box constraints, meaning that the limits on the parameters are independent of each other and constant during the minimisation. If you want limits that depend on each other, you have three options (all with caveats), which are listed in increasing order of difficulty:
Change the variables so that the limits become independent. For example, transform from cartesian coordinates to polar coordinates for a circle. This is not always possible, of course. (For the r1/r2 case in the question it is; see the sketch after this answer.)
Use another minimiser to locate the minimum which supports complex boundaries. The nlopt library and scipy.optimize have such minimisers. Once the minimum is found and if it is not near the boundary, place box constraints around the minimum and run iminuit to get the uncertainties (make sure that the box constraints are not too tight around the minimum). Neither nlopt nor scipy can give you the uncertainties.
Artificially increase the negative log-likelihood in the forbidden region. This is not as easy as it sounds.
The third method done properly is known as the interior point or barrier method. A glance at the Wikipedia article shows that one has to either run a series of minimisations with iminuit (and find a clever way of knowing when to stop) or implement this properly at the level of a Newton step, which would require changes to the complex and convoluted internals of MINUIT2.
Warning: you cannot just add a large value to the likelihood when the parameter boundary is violated. MIGRAD expects the likelihood function to be differentiable everywhere, because it uses the gradient of the likelihood to go downhill. The derivative at a discrete step is infinite, and it is zero in the forbidden region. MIGRAD does not like this at all.
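For the question's r2 < r1 constraint, the first option works directly: fit r2 and a non-negative difference d, and reconstruct r1 = r2 + d. A minimal sketch with iminuit (the model and data are placeholders, not from the question):

```python
import numpy as np
from iminuit import Minuit

x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-x / 1.5) + 0.5 * np.exp(-x / 0.4)  # placeholder data

def cost(r2, d):
    r1 = r2 + d  # reparametrisation guarantees r1 >= r2
    model = 2.0 * np.exp(-x / r1) + 0.5 * np.exp(-x / r2)
    return np.sum((y - model) ** 2)

cost.errordef = Minuit.LEAST_SQUARES

m = Minuit(cost, r2=0.5, d=0.5)
m.limits["r2"] = (1e-3, None)  # keep r2 positive (and away from 0)
m.limits["d"] = (0, None)      # box constraint d >= 0  <=>  r2 <= r1
m.migrad()
print("r2 =", m.values["r2"], " r1 =", m.values["r2"] + m.values["d"])
```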

Piecewise polynomial order and knot number+position optimization for a given maximum error

I'm trying to approximate a digital filter impulse response with a set of piecewise polynomials:
1. The number of segments (knots) is a free parameter on the entire interval [0, 1). To give some perspective on the problem size, I'm expecting something like 256 to 1024 segments for a good approximation.
2. The knot positions have to fall on a power-of-2 integer grid on the interval [0, 1] for easy hardware implementation of the polynomial selection.
3. The polynomial order of each segment can be different, the lower the better. The maximum order is known (it could be set to 2 or 3).
4. The lengths of the segments do not need to be equal, as long as (2) is obeyed.
For example a linear segment on [0, 1/256) followed by a 3rd order segment on [1/256, 22/256) followed by a 2nd order segment on [22/256, 1) would be fine.
The goal is to minimize some kind of combination of the number of segments and their order to reduce overall computation/memory cost (tradeoff to be defined), while the mean square or maximum error between fitted curve and ideal is below a given value.
I know I could brute-force search the entire space, calculating the max error for each allowed polynomial order on each allowed segment. I could then 'construct' the final piecewise curve by walking through this large table, although I'm not entirely sure how exactly to do that final construction.
I'm wondering if this is not a 'known' type of problem for which algorithms already exist. Any comments welcome!
You can try a variant of the Ramer–Douglas–Peucker algorithm. It's an easy-to-implement algorithm for simplifying polygonal lines. In your context, the polygonal line is a sample of your filter curve at the grid points and the algorithm certifies that the maximal error is smaller than some threshold.
If you need a smooth curve you can modify the algorithm to use quadratic spline interpolation instead of a polyline approximation (which corresponds to linear spline interpolation), and similarly a cubic spline for second-order continuity. In each iteration the farthest sample point is added to the interpolation point set and the interpolation spline is re-computed.
A slightly different alternative is to use a least-squares approximating spline instead of an interpolating spline. A new knot is added in each iteration at the grid point with the largest deviation, but the curve is not required to pass through it.
This approach, while simple, answers most of your requirements and gives good results in practice.
However, it may not give the theoretically optimal solution (although I don't currently have a counter-example).
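A minimal sketch of the greedy piecewise-linear variant (the tolerance and test curve are made up; the split point is always the worst sample, as in Ramer–Douglas–Peucker):

```python
import numpy as np

def rdp_knots(x, y, tol):
    """Greedy Ramer-Douglas-Peucker-style knot selection: keep splitting a
    segment at its worst sample until the max deviation from the chord is
    below tol. Returns the indices of the selected knots."""
    knots = {0, len(x) - 1}
    stack = [(0, len(x) - 1)]
    while stack:
        a, b = stack.pop()
        if b - a < 2:
            continue
        # Vertical deviation of interior samples from the chord through (a, b).
        # (Classic RDP uses perpendicular distance; vertical error suits
        # function approximation.)
        chord = y[a] + (y[b] - y[a]) * (x[a + 1:b] - x[a]) / (x[b] - x[a])
        err = np.abs(y[a + 1:b] - chord)
        worst = np.argmax(err)
        if err[worst] > tol:
            k = a + 1 + worst
            knots.add(k)
            stack += [(a, k), (k, b)]
    return sorted(knots)

# Example: an impulse-response-like curve sampled on a power-of-2 grid.
x = np.arange(1024) / 1024.0
y = np.sinc(8 * x) * np.exp(-3 * x)
print(rdp_knots(x, y, tol=1e-3))
```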

Ordering CONCAVE polygon vertices (counter)clockwise?

I have a set of disordered vertices that may form a concave polygon. I wish to order them either clockwise or counterclockwise.
An answer here suggests the following steps:
Find the polygon center
Compute angles
Order points by angle
This obviously works only for convex polygons and will fail when the points form a concave one.
How may I do this to a concave one?
I am using Python, but welcome all generic answers.
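For reference, a minimal sketch of the centroid angle-sort from the steps above, which is only reliable for convex point sets:

```python
import numpy as np

def angle_sort(points):
    """Order 2D points counterclockwise around their centroid.
    Only reliable when the points form a convex polygon."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)                     # step 1: polygon center
    angles = np.arctan2(points[:, 1] - center[1],
                        points[:, 0] - center[0])    # step 2: angles
    return points[np.argsort(angles)]                # step 3: order by angle
```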
In general, your problem seems ill-defined. For example, given the following set of vertices:
which of these non-convex polygons would you consider to be the "correct" way to connect them?
Now, obviously, there are various possible criteria that you could use to choose between different possible orders. For example, you might want to choose the ordering that minimizes the total length of the edges, which should yield fairly "reasonable" results if the points do, in fact, lie fairly close to each other on the boundary of a simple polygon:
Unfortunately, for a general set of points, finding the ordering that minimizes the total edge length turns out to be a well known NP-complete problem. That said, there are many heuristic algorithms that can usually find a nearly optimal solution quickly, even if they can't always guarantee that the solution they find is the true minimum.
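As an example of such a heuristic, a greedy nearest-neighbour ordering (cheap, not guaranteed optimal, and sensitive to the starting point):

```python
import numpy as np

def nearest_neighbor_order(points, start=0):
    """Greedy tour: repeatedly hop to the closest unvisited point.
    A cheap heuristic for the minimum-total-edge-length ordering."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return points[order]
```

A local improvement pass such as 2-opt on top of this greedy tour usually gets much closer to the optimum.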

Curve_fit not converging means...?

I need to crossmatch a list of astronomical coordinates with different catalogues, and I want to decide a maximum radius for the crossmatch. This will avoid mismatches between my list and the catalogues.
To do this, I compute the separation to the best catalogue match for each object in my list. Each entry in my initial list is supposed to be the position of a known object, but it can happen that the object is not detected in the catalogue, and my coordinates may suffer from small offsets.
The way I am computing the maximum radius is by fitting the gaussian kernel density of the separations with a gaussian and using the centre + 3 sigma value. The method works nicely in most cases, but when a small subsample of my list has an offset, I get two gaussians instead. In those cases I will specify the max radius in a different way.
My problem is that when this happens, curve_fit usually cannot fit a single gaussian. For a scientific publication, I will need to justify the "no fit" from curve_fit and say in which cases the "different way" is used. Could someone give me a hand with what this means in mathematical terms?
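For reference, a minimal sketch of the procedure described (the separations are placeholder data and the initial guesses are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma**2))

rng = np.random.default_rng(1)
separations = np.abs(rng.normal(0.3, 0.1, 500))  # placeholder separations

# Gaussian kernel density of the separations, evaluated on a grid.
grid = np.linspace(0, separations.max(), 200)
density = gaussian_kde(separations)(grid)

# Fit a single gaussian to the KDE; a sensible p0 helps convergence.
popt, _ = curve_fit(gauss, grid, density,
                    p0=[density.max(), grid[np.argmax(density)], 0.1])
a, mu, sigma = popt
max_radius = mu + 3 * abs(sigma)  # centre + 3 sigma
```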
There are varying lengths to which you can go in justifying this or that fitting ansatz, and which is appropriate depends strongly on the details of your specific case (e.g. why do you expect a gaussian to work in the first place? how deeply do you need or want to delve into why exactly a certain fitting procedure fails, and what exactly counts as a failure? etc.).
If the question is really about the curve_fit and its failure to converge, then show us some code and some input data which demonstrate the problem.
If the question is about how to evaluate the goodness-of-fit, you're best off going back to the library and picking a good book on statistics.
If all you are looking for is a way of justifying why, in a certain case, a gaussian is not a good fitting ansatz, one way would be to calculate the moments: for a gaussian distribution the 1st, 2nd, 3rd and higher moments are related to each other in a very precise way. If you can demonstrate that for your underlying data the relations between the moments are very different, it is reasonable to conclude that these data can't be fit by a gaussian.
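A minimal sketch of that moment check with scipy (placeholder data; for a gaussian, the skewness and excess kurtosis, i.e. the standardised 3rd and 4th moments, are both zero):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
separations = rng.normal(0.3, 0.1, 500)  # placeholder sample

skew = stats.skew(separations)      # 3rd standardised moment, 0 for a gaussian
kurt = stats.kurtosis(separations)  # excess kurtosis (4th moment - 3), 0 for a gaussian

# Large |skew| or |kurt| relative to their standard errors under normality
# (~sqrt(6/n) and ~sqrt(24/n)) argues that a single gaussian is a poor ansatz.
n = len(separations)
print(skew / np.sqrt(6 / n), kurt / np.sqrt(24 / n))
```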
