I have a large quantity of pixel colors (96 thousand different colors):
And I want to get some kind of a mathematically-defined probability region like in this question:
The main obstacle I see right now is that all the methods I can find on Google are mainly about visualisation and about two-dimensional spaces; there seems to be no algorithm for finding the coefficients of an equation like:
a1*x^2 + b1*y^2 + c1*z^2 + a2*x*y + b2*x*z + c2*y*z + a3*x + b3*y + c3*z = 0
And this paper is too difficult for me to implement in Python. :(
Anyway, what I want is to determine whether some pixel lies, more or less, within the color range I have.
I tried approaching it with scikit-learn clustering, but I failed, probably because I have only one set of data. And creating an array of 256³ elements representing each possible pixel color seems like the wrong way.
I wonder if there is an easy way to determine the boundaries of this point cluster?
Or maybe I'm just overthinking it and there is something like OpenCV's cv2.inRange() function?
This can be solved by optimization and fitting of the ellipsoid polynomial. However, I would start with a geometrical approach, which is much faster:
find avg point position
that will be the center of your ellipsoid
p0 = sum (p[i]) / n // average
i = { 0,1,2,3,...,n-1 } // of all points
If your point density is not homogeneous then it is safer to use the bounding-box center instead. So find xmin, ymin, zmin, xmax, ymax, zmax, and the middle between them is your center.
find the most distant point from the center
that will give you the main semi-axis
pa = p[j];
|p[j]-p0| >= |p[i]-p0| // max
i = { 0,1,2,3,...,n-1 } // of all points
find the second semi-axis
The vector pa-p0 is normal to the plane in which the other semi-axes should lie, so find the most distant point from p0 within that plane:
pb = p[j];
|p[j]-p0| >= |p[i]-p0| // max
dot(pa-p0,p[j]-p0) == 0 // but only if inside the plane
i = { 0,1,2,3,...,n-1 } // from all points
beware that the result of the dot product may not be precisely zero, so it is better to test against something like this:
|dot(pa-p0,p[j]-p0)| <= 1e-3
You can use any threshold you want (should be based on the ellipsoid size).
find the last semi-axis
So we know that the last semi-axis should be perpendicular to both
(pa-p0) AND (pb-p0)
So find the point such that:
pc = p[j];
|p[j]-p0| >= |p[i]-p0| // max
dot(pa-p0,p[j]-p0) == 0 // but only if inside the plane
dot(pb-p0,p[j]-p0) == 0 // and perpendicular also to b semi-axis
i = { 0,1,2,3,...,n-1 } // from all points
Ellipsoid
Now you have all the parameters you need to form your ellipsoid. The vectors
(pa-p0),(pb-p0),(pc-p0)
are the basis vectors of your ellipsoid (you can make them perpendicular by using the cross product). Their sizes give you the radii, and p0 is the center. You can also use this parametric equation:
a=pa-p0;
b=pb-p0;
c=pc-p0;
p(u,v) = p0 + a*cos(u)*cos(v)
+ b*cos(u)*sin(v)
+ c*sin(u);
u = < -0.5*PI , +0.5*PI >
v = < 0.0 , 2.0*PI >
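For reference, here is a minimal NumPy sketch of the above steps, assuming pts is an (n, 3) array of points. Note that instead of the raw dot-product threshold it tests the cosine of the angle, so tol is independent of the ellipsoid size; loosen tol if no candidate passes:
import numpy as np

def ellipsoid_axes(pts, tol=1e-2):
    """Geometric O(n) estimate of the center p0 and semi-axis vectors a, b, c."""
    p0 = pts.mean(axis=0)                 # center = average of all points
    d = pts - p0
    r = np.linalg.norm(d, axis=1)
    rs = np.clip(r, 1e-12, None)          # avoid division by zero at the center

    a = d[np.argmax(r)]                   # main semi-axis: farthest point

    # second semi-axis: farthest point whose direction is (nearly)
    # perpendicular to a; cosine test instead of a raw dot product
    cos_a = np.abs(d @ a) / (np.linalg.norm(a) * rs)
    b = d[np.argmax(np.where(cos_a <= tol, r, -1.0))]

    # third semi-axis: farthest point (nearly) perpendicular to both a and b
    cos_b = np.abs(d @ b) / (np.linalg.norm(b) * rs)
    c = d[np.argmax(np.where((cos_a <= tol) & (cos_b <= tol), r, -1.0))]
    return p0, a, b, c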
This whole process is just O(n), and the results can be used as a starting point for both optimization and fitting, to speed them up without loss of accuracy. If you want to further improve accuracy, see:
How approximation search works
The sub-links show you examples of fitting ...
You can also take a look at this:
Algorithms: Ellipse matching
which is basically similar to your task, just in 2D; it may still bring you some ideas.
Here is a non-strict solution with a fast and simple random-search approach*. Best side: no heavy linear-algebra library required**. It seems to have worked fine for mesh collision detection.
It assumes that the ellipsoid center matches the cloud center and then uses a sort of mirrored average to search for the main axis.
The full working code is slightly bigger and placed on git; the idea of the main-axis search is here:
import numpy as np

np.random.shuffle(pts)
pts_len = len(pts)
pt_average = np.sum(pts, axis=0) / pts_len
vec_major = pt_average * 0
minor_max, major_max = 0, 0
# may be improved with an overlapped pass
for pt_cur in pts:
    vec_cur = pt_cur - pt_average
    # proj_length (defined in the full code on git) returns the lengths of
    # the projection and rejection of vec_cur onto vec_major
    proj_len, rej_len = proj_length(vec_cur, vec_major)
    if proj_len < 0:
        vec_cur = -vec_cur
    vec_major += (vec_cur - vec_major) / pts_len
    major_max = max(major_max, abs(proj_len))
    minor_max = max(minor_max, rej_len)
It can be improved/optimized even more at some points. Examples of what it will produce:
And full experiment code with plots
*i.e. adjusting code lines randomly until they work
**which was actually the reason to figure out this solution
I have to work on a task of using hand keypoints as a pointer (or touchless mouse).
The main problem here is that the (deep learning) hand keypoints are not perfect (sometimes under varied lighting and skin colors), so the chosen keypoints scatter instead of moving smoothly like the real mouse we use.
How can I smooth them online (in real time)? Not a solution that is given an array of 2D points and then smooths over that array: here new points come in one by one, and we have to correct them immediately to keep the user from suffering a jittery cursor.
I'm using OpenCV and Python. Please be nice since I'm new to Computer Vision.
Thanks
The simplest way is to use a moving average. You can compute, very efficiently, the average position of the last n steps and use that to "smooth" the trajectory:
n = 5        # the average "window size"
counter = 0  # count how many steps so far
avg = 0.     # the average

while True:
    # every time step
    val = get_keypoint_value_for_this_time_step()
    counter += 1
    coeff = 1. / min(counter, n)
    # update using moving average
    avg = coeff * val + (1. - coeff) * avg
    print(f'Current time step={val} smoothed={avg}')
More variants of moving averages can be found here.
Since you want physics-like behavior, you can use a simple physics model. Note that all arrays below describe properties of the current state of your dynamics and therefore have the same shape, (1, 2).
define forces
a : attraction = (k - p) * scaler
update velocity
v : velocity = v + a
update position
p : current position = p + v
k : new DL keypoint = whatever your DL model outputs
You output p to the user. Note: if you want a more natural motion, you can play with the scaler or add additional forces (like a) to v.
Also, to get all points, concatenate p to ps, where ps.shape = (n, 2).
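A minimal sketch of such a physics model follows; scaler and the extra damping factor are illustrative tuning knobs (damping is not part of the description above, it is added so the cursor settles instead of oscillating forever):
import numpy as np

class PhysicsPointer:
    def __init__(self, scaler=0.2, damping=0.85):
        self.scaler = scaler
        self.damping = damping
        self.p = None                     # current position, shape (1, 2)
        self.v = np.zeros((1, 2))         # current velocity, shape (1, 2)

    def update(self, k):
        """k: new DL keypoint of shape (1, 2); returns the smoothed position p."""
        if self.p is None:                # first frame: snap to the keypoint
            self.p = k.astype(float).copy()
            return self.p
        a = (k - self.p) * self.scaler    # attraction force toward the keypoint
        self.v = (self.v + a) * self.damping
        self.p = self.p + self.v
        return self.p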
I am trying to pack hard spheres in a unit cubic box, such that these spheres cannot overlap each other. This is being done in Python.
I am given some packing fraction f, and the number of spheres in the system is N.
So, I say that the diameter of each sphere will be
d = (6*f/(math.pi*N))**(1/3)
My box has periodic boundary conditions, which means that there is a recurring image of my box in all directions. If a particle is at the edge of the box and has a portion going beyond the wall, it sticks out at the other side.
My attempt:
Create a numpy N-by-3 array box which holds the position vector of each particle [x,y,z]
The first particle is fine as it is.
The next particle in the array is checked against all the previous particles. If the distance between them is more than d, move on to the next particle. If they overlap, randomly change the position vector of the particle in question. If the new position does not overlap with the previous particles, accept it.
Repeat steps 2-3 for the next particle.
I am trying to populate my box with these hard spheres, in the following manner:
for i in range(1, N):
    mybool = True
    print("particles in box: " + str(i))
    # if we place a bad particle, we need to change its position
    # and restart the process of checking
    while mybool:
        for j in range(0, i):
            displacement = box[j, :] - box[i, :]
            for k in range(3):
                # wrap around for the periodic boundaries
                if abs(displacement[k]) > L/2:
                    displacement[k] -= L*np.sign(displacement[k])
            # check distance between the ith particle and the trailing j particles
            distance = np.linalg.norm(displacement, 2)
            if distance < diameter:
                # move the ith particle randomly and restart the process
                box[i, :] = np.random.uniform(0, 1, (1, 3))
                break
            if j == i-1 and distance > diameter:
                mybool = False
                break
The problem with this code is that for f = 0.45 it takes a really, really long time to converge. Is there a better, more efficient method to solve this problem?
I think what you are looking for is either the hexagonal close-packed (HCP) lattice or the cubic close-packed (CCP) one, the latter also known as face-centered cubic (FCC). See e.g. Wikipedia on Close-packing of equal spheres.
Since your space has periodic conditions, I believe it doesn't matter which one you choose (HCP or CCP); they both achieve the same density of ~74.04%, which Gauss proved to be the highest density achievable by lattice packing.
Update:
For the follow-up question on how to generate efficiently one such lattice, let's take as an example the HCP lattice. First, let's create a bunch of (i, j, k) indices [(0,0,0), (1,0,0), (2,0,0), ..., (0,1,0), ...]. Then, get xyz coordinates from those indices and return a DataFrame with them:
import numpy as np
import pandas as pd

def hcp(n):
    dim = 3
    k, j, i = [v.flatten()
               for v in np.meshgrid(*([range(n)] * dim), indexing='ij')]
    df = pd.DataFrame({
        'x': 2 * i + (j + k) % 2,
        'y': np.sqrt(3) * (j + 1/3 * (k % 2)),
        'z': 2 * np.sqrt(6) / 3 * k,
    })
    return df
We can plot the result as scatter3d using plotly for interactive exploration:
import plotly.graph_objects as go
df = hcp(12)
fig = go.Figure(data=go.Scatter3d(
x=df.x, y=df.y, z=df.z, mode='markers',
marker=dict(size=df.x*0 + 30, symbol="circle", color=-df.z, opacity=1),
))
fig.show()
Note: plotly's scatter3d is not a very good rendering of spheres: the marker sizes are constant (so when you zoom in and out, the "spheres" will appear to change relative size), and there is no shading, limited z-ordering faithfulness, etc., but it's convenient to interact with the plot.
Resize and clip to the unit box:
Here, a strict clipping (each sphere needs to be completely inside the unit box). Your "periodic boundary condition" is something you will need to address separately (see further below for ideas).
def hcp_unitbox(r):
    n = int(np.ceil(1 / (np.sqrt(3) * r)))
    df = hcp(n) * r                        # scale so nearest spheres touch
    df += r                                # shift away from the lower walls
    df = df[(df <= 1 - r).all(axis=1)]     # keep only fully enclosed spheres
    return df
With this, you find that a radius of 0.06 gives you 608 fully enclosed spheres:
hcp_unitbox(.06).shape # (608, 3)
Where you would go next:
You may dig deeper into the effect of your so-called "periodic boundary conditions", and perhaps play with some rotations (and small translations).
To do so, you may try to generate an HCP-lattice that is large enough that any rotation will still fully enclose your unit cube. For example:
r = 0.2 # example
n = int(np.ceil(2 / r))
df = hcp(n) * r - 1
Then rotate it (by any amount) and translate it (by up to 1 radius in any direction) as you wish for your research, and clip. The "periodic boundary conditions", as you call them, present a bit of extra challenge, as the clipping becomes trickier. First, clip any sphere whose center is outside your box. Then select spheres close enough to the boundaries, or even partition the regions of interest into overlapping regions along the walls of your cube, then check for collisions among the spheres (as per your periodic boundary conditions) that fall in each such region.
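For the collision checks under periodic boundary conditions, here is a small vectorized sketch of the minimum-image test (the same wrap-around trick as in the question's loop, assuming a unit box and pts an (n, 3) array of centers; memory is O(n²), which is fine for a few thousand spheres):
import numpy as np

def has_overlap(pts, diameter):
    """True if any pair of spheres overlaps under periodic boundaries (unit box)."""
    delta = pts[:, None, :] - pts[None, :, :]   # all pairwise displacements
    delta -= np.rint(delta)                      # minimum-image convention, box size 1
    dist2 = (delta ** 2).sum(axis=-1)
    iu = np.triu_indices(len(pts), k=1)          # each pair once, skip self-pairs
    return bool((dist2[iu] < diameter ** 2).any())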
I have a set of approximately 10,000 vectors max (random directions) in 3d space and I'm looking for a new direction v_dev (vector) which deviates from all other directions in the set by e.g. a minimum of 5 degrees. My naive initial try is the following, which has of course bad runtime complexity but succeeds for some cases.
#!/usr/bin/env python
import numpy as np

numVecs = 10000
vecs = np.random.rand(numVecs, 3)
randVec = np.random.rand(1, 3)
notFound = True
foundVec = randVec
below = False
iter = 1

for vec in vecs:
    angle = np.rad2deg(np.arccos(np.vdot(vec, foundVec)/(np.linalg.norm(vec) * np.linalg.norm(foundVec))))
    print("angle: %f\n" % angle)

while notFound:
    below = False  # reset for each new candidate
    for vec in vecs:
        angle = np.rad2deg(np.arccos(np.vdot(vec, randVec)/(np.linalg.norm(vec) * np.linalg.norm(randVec))))
        if angle < 5:
            below = True
            break  # no need to check the remaining vectors
    if below:
        randVec = np.random.rand(1, 3)
    else:
        notFound = False
    print("iteration no. %i" % iter)
    iter = iter + 1
Any hints on how to approach this problem (language agnostic) would be appreciated.
Consider the vectors in a spherical coordinate system (u,w,r), where r is always 1 because vector length doesn't matter here. Any vector can then be expressed as (u,w), and the "deadzone" around each vector x, in which the target vector t may not fall, is the set of (u_t, w_t) whose angular distance to (u_x, w_x) is less than 5°. Calculating this distance directly can be a bit tricky, so converting back into cartesian coordinates might be easier. These deadzones are circles on the spherical shell around the origin, and you're looking for a t that doesn't hit any of them.
For any fixed u_t you can iterate over all x and, using the distance function, find the start and end point of the range of w_t that is blocked because it falls into the deadzone of the vector x. The union of all 10000 ranges builds the blocked values of w_t for that given u_t. The same can be done for any fixed w_t, looking for a u_t.
Now comes the part that I'm not entirely sure of: given that you have two unknowns u_t and w_t and 20000 knowns, the system is just a tad overdetermined, and if there's a solution, it should be possible to find it.
My suggestion: set u_t to a fixed random value and check which w_t are possible. If you find a non-empty range, great, you're done. If all w_t are blocked, select a different u_t and try again. Selecting u_t at random will work eventually, yet a smarter iteration should be possible, for example u_t(n) = u_t(n-1)*phi % 360°, where phi is the golden ratio. That way the u_t values never repeat and cover the whole space with finer and finer granularity instead of sweeping slowly from one end to the other (see the sketch below).
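As a sketch of that idea: instead of scanning u_t alone, one can generate candidate directions over the whole sphere with the golden angle (a Fibonacci sphere), which covers it with finer and finer granularity; the 5° threshold is from the question, the batch sizes are illustrative:
import numpy as np

def fibonacci_sphere(m):
    """m quasi-uniform unit vectors on the sphere via the golden angle."""
    i = np.arange(m)
    golden = np.pi * (3.0 - np.sqrt(5.0))     # golden angle in radians
    z = 1.0 - 2.0 * (i + 0.5) / m
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(golden * i), r * np.sin(golden * i), z], axis=1)

def min_angle_deg(candidate, vecs):
    """Smallest angle (degrees) between candidate and any vector in vecs."""
    cos = vecs @ candidate / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(candidate))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).min()

vecs = np.random.rand(10000, 3)
for m in (64, 256, 1024, 4096):               # finer and finer coverage
    candidates = fibonacci_sphere(m)
    angles = np.array([min_angle_deg(c, vecs) for c in candidates])
    if angles.max() >= 5.0:
        print("found:", candidates[angles.argmax()])
        break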
Edit: You might also have more luck on the Mathematics Stack Exchange, since this isn't so much a code question as a mathematics question. For example, I'm not sure what I wrote is all that rigorous, so I don't even know if it works.
One way would be to build a 2D manifold (an area on the sphere) of forbidden regions. You start by adding a point; the forbidden area is then a circle on the sphere's surface.
While true: pick a point on the boundary of the area. If it is not close (within 5 degrees) to any other vector, you're done, return it. If it is, you just found a new circle of forbidden area. Add it to your manifold of forbidden area. You'll need to chop the circle into line or arc segments and build the boundary as a list.
If the set of vectors has no solution, your boundary will collapse to an empty region. Then you return failure.
It's not the easiest approach, and you'll have to deal with the boundary of a complex shape on a sphere. But it's guaranteed to work and should have reasonable complexity.
I am trying to create a Voronoi diagram using a 3d array representing voxels where the coordinates are represented by the indices of the array.
For example:
import numpy as np

points = np.random.random([10, 3])
vox_len = 0.1
lx = ly = lz = 11
hull_space = np.zeros([lx, ly, lz], dtype=np.int16)
hull_space.fill(-1)
for i in range(lx):
    for j in range(ly):
        for k in range(lz):
            coord = np.array([i, j, k]).astype(float)*vox_len
            diff = points - coord
            dist = np.sqrt(diff[:, 0]**2 + diff[:, 1]**2 + diff[:, 2]**2)
            closest = np.argmin(dist)
            hull_space[i][j][k] = closest
The problem is that with larger arrays the code is very slow. I've also tried:
grid = np.indices((lx, ly, lz)).astype(float)*vox_len
hull_space = np.ones([lx, ly, lz], dtype=np.int16)
hull_space.fill(-1)
closest_dist = np.ones(hull_space.shape)
closest_dist.fill(999)
for index, point in enumerate(points):
    dist2 = (grid[0]-point[0])**2 + (grid[1]-point[1])**2 + (grid[2]-point[2])**2
    hull_space[dist2 < closest_dist] = index
    closest_dist[dist2 < closest_dist] = dist2[dist2 < closest_dist]
But the results were even worse. I am aware of scipy's Voronoi diagram methods, but I need to build a voxel image in order to do distance transforms of the edges of the Voronoi regions. My goal is to create an image of an entangled fibrous structure to use in a pore network model, and the voxel approach is useful for getting pore volumes.
Any help making the algorithm run faster would be appreciated. For a 300x300x300 diagram with 500 points it takes about 50 minutes on my machine.
You can try to sort the points: treat each coordinate as a binary number and interleave the bits, then sort by the resulting key. In fact CGAL uses the same algorithm (spatial sorting); a sketch of the key computation follows.
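For what it's worth, the interleaved-bits key described here is known as a Morton (Z-order) code; a minimal sketch, assuming integer voxel coordinates of at most 10 bits per axis:
def morton3d(x, y, z, bits=10):
    """Interleave the bits of integer coordinates x, y, z into one sort key."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b)
        code |= ((y >> b) & 1) << (3 * b + 1)
        code |= ((z >> b) & 1) << (3 * b + 2)
    return code

# sorting by the key keeps spatially nearby voxels close together in memory
coords = [(3, 1, 4), (1, 5, 9), (2, 6, 5)]
coords.sort(key=lambda c: morton3d(*c))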
I have some geo data (the image below shows the path of a river as red dots) which I want to approximate using a multi segment cubic bezier curve. Through other questions on stackoverflow here and here I found the algorithm by Philip J. Schneider from "Graphics Gems". I successfully implemented it and can report that even with thousands of points it is very fast. Unfortunately that speed comes with some disadvantages, namely that the fitting is done quite sloppily. Consider the following graphic:
The red dots are my original data and the blue line is the multi segment bezier created by the algorithm by Schneider. As you can see, the input to the algorithm was a tolerance which is at least as high as the green line indicates. Nevertheless, the algorithm creates a bezier curve which has too many sharp turns. You see two of these unnecessary sharp turns in the image. It is easy to imagine a bezier curve with fewer sharp turns for the shown data while still maintaining the maximum tolerance condition (just push the bezier curve a bit in the direction of the magenta arrows). The problem seems to be that the algorithm picks data points from my original data as end points of the individual bezier curves (the magenta arrows indicate some suspects). With the endpoints of the bezier curves restricted like that, it is clear that the algorithm will sometimes produce rather sharp curvatures.
What I am looking for is an algorithm which approximates my data with a multi segment bezier curve with two constraints:
the multi segment bezier curve must never be more than a certain distance away from the data points (this is provided by the algorithm by Schneider)
the multi segment bezier curve must never create curvatures that are too sharp. One way to check for this criteria would be to roll a circle with the minimum curvature radius along the multisegment bezier curve and check whether it touches all parts of the curve along its path. Though it seems there is a better method involving the cross product of the first and second derivative
The solutions I found which create better fits sadly either work only for single bezier curves (and omit the question of how to find good start and end points for each bezier curve in the multi segment bezier curve) or do not allow a minimum curvature constraint. I feel that the minimum curvature constraint is the tricky condition here.
Here is another example (this is hand drawn and not 100% precise):
Let's suppose that figure one shows both the curvature constraint (the circle must fit along the whole curve) and the maximum distance of any data point from the curve (which happens to be the radius of the circle, in green). A successful approximation of the red path in figure two is shown in blue. That approximation honors the curvature condition (the circle can roll inside the whole curve and touches it everywhere) as well as the distance condition (shown in green). Figure three shows a different approximation to the path. While it honors the distance condition, it is clear that the circle does not fit into the curvature any more. Figure four shows a path which is impossible to approximate with the given constraints because it is too pointy. This example is supposed to illustrate that to properly approximate some pointy turns in the path, the algorithm must choose control points which are not part of the path. Figure three shows that if control points were chosen only along the path, the curvature constraint could not be fulfilled. This example also shows that the algorithm must give up on some inputs, as they cannot be approximated under the given constraints.
Does there exist a solution to this problem? The solution does not have to be fast. If it takes a day to process 1000 points, then that's fine. The solution does also not have to be optimal in the sense that it must result in a least squares fit.
In the end I will implement this in C and Python but I can read most other languages too.
I found the solution that fulfills my criteria. The solution is to first find a B-spline that approximates the points in the least-squares sense and then convert that spline into a multi segment bezier curve. B-splines have the advantage that, in contrast to bezier curves, they will not pass through the control points, and they provide a way to specify a desired "smoothness" of the approximation curve. The needed functionality to generate such a spline is implemented in the FITPACK library, to which scipy offers a Python binding. Let's suppose I read my data into the lists x and y, then I can do:
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
tck,u = interpolate.splprep([x,y],s=3)
unew = np.arange(0,1.01,0.01)
out = interpolate.splev(unew,tck)
plt.figure()
plt.plot(x,y,out[0],out[1])
plt.show()
The result then looks like this:
If I want the curve smoother, I can increase the s parameter given to splprep; if I want the approximation closer to the data, I can decrease it for less smoothness. By going through multiple s parameters programmatically I can find a good parameter that fits the given requirements.
The question though is how to convert that result into a bezier curve. The answer comes in this email by Zachary Pincus. I will replicate his solution here to give a complete answer to my question:
def b_spline_to_bezier_series(tck, per=False):
    """Convert a parametric b-spline into a sequence of Bezier curves of the same degree.

    Inputs:
      tck : (t,c,k) tuple of b-spline knots, coefficients, and degree returned by splprep.
      per : if tck was created as a periodic spline, per *must* be true, else per *must* be false.

    Output:
      A list of Bezier curves of degree k that is equivalent to the input spline.
      Each Bezier curve is an array of shape (k+1,d) where d is the dimension of the
      space; thus the curve includes the starting point, the k-1 internal control
      points, and the endpoint, where each point is of d dimensions.
    """
    from scipy.interpolate import insert
    from numpy import asarray, unique, split, sum, transpose
    t, c, k = tck
    t = asarray(t)
    try:
        c[0][0]
    except:
        # I can't figure out a simple way to convert nonparametric splines to
        # parametric splines. Oh well.
        raise TypeError("Only parametric b-splines are supported.")
    new_tck = tck
    if per:
        # ignore the leading and trailing k knots that exist to enforce periodicity
        knots_to_consider = unique(t[k:-k])
    else:
        # the first and last k+1 knots are identical in the non-periodic case, so
        # no need to consider them when increasing the knot multiplicities below
        knots_to_consider = unique(t[k+1:-k-1])
    # For each unique knot, bring its multiplicity up to the next multiple of k+1.
    # This removes all continuity constraints between each of the original knots,
    # creating a set of independent Bezier curves.
    desired_multiplicity = k+1
    for x in knots_to_consider:
        current_multiplicity = sum(t == x)
        remainder = current_multiplicity % desired_multiplicity
        if remainder != 0:
            # add enough knots to bring the current multiplicity up to the desired multiplicity
            number_to_insert = desired_multiplicity - remainder
            new_tck = insert(x, new_tck, number_to_insert, per)
    tt, cc, kk = new_tck
    # strip off the last k+1 knots, as they are redundant after knot insertion
    bezier_points = transpose(cc)[:-desired_multiplicity]
    if per:
        # again, ignore the leading and trailing k knots
        bezier_points = bezier_points[k:-k]
    # group the points into the desired bezier curves
    return split(bezier_points, len(bezier_points) // desired_multiplicity, axis=0)
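A hypothetical usage, reusing the tck returned by the splprep call above:
bezier_segments = b_spline_to_bezier_series(tck)
for seg in bezier_segments:
    # seg has shape (k+1, d): the control points of one Bezier segment
    print(seg)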
So B-Splines, FITPACK, numpy and scipy saved my day :)
polygonize the data
find the order of the points: just find the points closest to each other and try to connect them 'by lines'. Avoid looping back to the origin point
compute the derivative along the path
it is the change of direction of the 'lines'; where you hit a local min or max, there is your control point ... Do this to reduce your input data (leave just the control points).
curve
now use these points as control points. I strongly recommend an interpolation polynomial for both x and y separately, for example something like this:
x=a0+a1*t+a2*t*t+a3*t*t*t
y=b0+b1*t+b2*t*t+b3*t*t*t
where a0..a3 are computed like this:
d1=0.5*(p2.x-p0.x);
d2=0.5*(p3.x-p1.x);
a0=p1.x;
a1=d1;
a2=(3.0*(p2.x-p1.x))-(2.0*d1)-d2;
a3=d1+d2+(2.0*(-p2.x+p1.x));
b0 .. b3 are computed in same way but use y coordinates of course
p0..p3 are control points for cubic interpolation curve
t =<0.0,1.0> is curve parameter from p1 to p2
this ensures that position and first derivative are continuous (C1); you could also use a BEZIER, but it will not match as well as this.
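Here is a small Python sketch of that coefficient computation, applied per coordinate axis (p0..p3 are four consecutive control points, and the numbers follow the formulas above):
def cubic_coeffs(c0, c1, c2, c3):
    """Interpolation cubic through c1..c2 with central-difference tangents."""
    d1 = 0.5 * (c2 - c0)
    d2 = 0.5 * (c3 - c1)
    a0 = c1
    a1 = d1
    a2 = 3.0 * (c2 - c1) - 2.0 * d1 - d2
    a3 = d1 + d2 + 2.0 * (c1 - c2)
    return a0, a1, a2, a3

def eval_cubic(coeffs, t):
    """Evaluate at t in [0, 1]; t=0 gives p1, t=1 gives p2."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t * t + a3 * t * t * t

# per-axis usage for points p0..p3, each an (x, y) tuple:
# ax = cubic_coeffs(p0[0], p1[0], p2[0], p3[0])
# ay = cubic_coeffs(p0[1], p1[1], p2[1], p3[1])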
[edit1] too sharp edges is a BIG problem
To solve it you can remove points from your dataset before obtaining the control points. I can think of two ways to do it right now ... choose whichever is better for you:
remove points from the dataset with too high a first derivative
dx/dl or dy/dl, where x,y are coordinates and l is the curve length (along the path). The exact computation of the curvature radius from curve derivatives is tricky
remove points from the dataset that lead to too small a curvature radius
compute the perpendicular axes (red lines in the image) at the midpoints of neighboring line segments (black lines); the distance between their intersection and the joint point (blue line) is your curvature radius. When the curvature radius is smaller than your limit, remove that point ...
now if you really need only BEZIER cubics then you can convert my interpolation cubic to BEZIER cubic like this:
// ---------------------------------------------------------------------------
// x=cx[0]+(t*cx[1])+(tt*cx[2])+(ttt*cx[3]); // cubic x=f(t), t = <0,1>
// ---------------------------------------------------------------------------
// cubic matrix bz4 = it4
// ---------------------------------------------------------------------------
// cx[0]= ( x0) = ( X1)
// cx[1]= (3.0*x1)-(3.0*x0) = (0.5*X2) -(0.5*X0)
// cx[2]= (3.0*x2)-(6.0*x1)+(3.0*x0) = -(0.5*X3)+(2.0*X2)-(2.5*X1)+( X0)
// cx[3]= ( x3)-(3.0*x2)+(3.0*x1)-( x0) = (0.5*X3)-(1.5*X2)+(1.5*X1)-(0.5*X0)
// ---------------------------------------------------------------------------
const double m=1.0/6.0;
double x0,y0,x1,y1,x2,y2,x3,y3;
x0 = X1; y0 = Y1;
x1 = X1-(X0-X2)*m; y1 = Y1-(Y0-Y2)*m;
x2 = X2+(X1-X3)*m; y2 = Y2+(Y1-Y3)*m;
x3 = X2; y3 = Y2;
In case you need the reverse conversion see:
Bezier curve with control points within the curve
The question was posted long ago, but here is a simple solution based on splprep, finding the minimal value of s that fulfills a minimum curvature-radius criterion.
route is the set of input points, the first dimension being the number of points.
import numpy as np
from scipy.interpolate import splprep, splev

# The minimum curvature radius we want to enforce
minCurvatureConstraint = 2000
# Relative tolerance on the radius
relTol = 1.e-6
# Initial values for bisection search, should bound the solution
s_0 = 0
minCurvature_0 = 0
s_1 = 100000000  # should be high enough to produce a curvature radius larger than the constraint
s_1 *= 2
minCurvature_1 = float('inf')

while np.abs(minCurvature_0 - minCurvature_1) > minCurvatureConstraint*relTol:
    s = 0.5 * (s_0 + s_1)
    tck, u = splprep(np.transpose(route), s=s)
    smoothed_route = splev(u, tck)
    # Compute radius of curvature
    derivative1 = splev(u, tck, der=1)
    derivative2 = splev(u, tck, der=2)
    xprim = derivative1[0]
    xprimprim = derivative2[0]
    yprim = derivative1[1]
    yprimprim = derivative2[1]
    curvature = 1.0 / np.abs((xprim*yprimprim - yprim*xprimprim) / np.power(xprim*xprim + yprim*yprim, 3 / 2))
    minCurvature = np.min(curvature)
    print("s is %g => Minimum curvature radius is %g" % (s, np.min(curvature)))
    # Perform bisection
    if minCurvature > minCurvatureConstraint:
        s_1 = s
        minCurvature_1 = minCurvature
    else:
        s_0 = s
        minCurvature_0 = minCurvature
It may require some refinement, such as iterating to find a suitable s_1, but it works.