Double integral in Cartesian coordinates instead of (R, Theta) - python

This follows my previous question (Integral of Intensity function in python).
You can see the diffraction model in the image below:
I want to calculate the integral of the intensity over each pixel (a square), so I can't use R and Theta as variables. How can I do this in X-Y coordinates?
Our function:
Instead of sin(theta) we can use:
sintheta = np.sqrt(x**2 + y**2) / np.sqrt(x**2 + y**2 + d**2)
Other constants:
lamb = 550e-9
k = 2.0*np.pi/lamb
a = 5.5*2.54e-2
d = 2.8
When you plot the function, the result is something like this (the image is a view from the top):
The method in the previous topic: integrate the function over (0.0, dist) and then multiply by 2*np.pi*x, where x = k*a*np.sin(theta). But now I want to integrate over each pixel, where the previous method doesn't work, because this is the X-Y coordinate system, not polar.

Actually, the integration in Cartesian coordinates is rather straightforward. Now that you have the intensity function, you have to express the radius r in terms of the coordinates x and y. A trivial thing which you have actually already done in your question.
So, the function to be integrated (without some constants) is:
import numpy as np
from scipy import special as sp

# Fraunhofer intensity function (circular aperture)
def f(x, y):
    r = np.sqrt(x**2 + y**2)
    return (sp.j1(r)/r)**2
Or by using the fact that 2 J1(x)/x = J0(x) + J2(x) [thanks, Jaime!]:
def f(x, y):
    r = np.sqrt(x**2 + y**2)
    return (sp.j0(r) + sp.jn(2, r))**2
This form is better in the sense that it does not have a singularity anywhere.
Now, I do not use any constant factors. You may add them if you want, but I find it easier to normalize with the integration result over infinite area. Otherwise it is too easy to just forget some constant (I do, usually).
The integration can be carried out with scipy.integrate.nquad. It accepts a multi-dimensional function to integrate. So, in this case:
import scipy.integrate
integral = scipy.integrate.nquad(f, ([-d/2, d/2], [-d/2, d/2]))[0]
However, as your function is very clearly symmetric, you might consider integrating over one quadrant only and then multiplying by four:
4. * scipy.integrate.nquad(f, ([0, d/2], [0, d/2]))[0]
Using these, the full intensity is:
>>> 4. * scipy.integrate.nquad(f, [[0, np.inf], [0, np.inf]])[0]
12.565472446489999
(Which looks very much like 4 pi, BTW.) Of course, you can also use the polar coordinates to calculate the full value, as the function has circular symmetry (as outlined in Integral of Intensity function in python). The different values are due to different scaling (2 pi omitted in the polar integration, 2 because I am using the sum form of the bessel functions here).
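As a quick cross-check of that 4 pi figure (a sketch of my own; quad may still warn about the slowly decaying oscillatory tail), the circular symmetry lets you reduce the area integral to a 1-D radial one:
import numpy as np
import scipy.integrate
from scipy import special as sp

def f_radial(r):
    return (sp.j0(r) + sp.jn(2, r))**2

# 2*pi*r is the polar area element; limit raised for the oscillatory integrand
total = scipy.integrate.quad(lambda r: 2*np.pi*r*f_radial(r), 0, np.inf, limit=500)[0]
print(total)  # approximately 12.566, i.e. 4*pi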
For example for a square area from -1..1 on both directions the normalized (divided by the above full power value) power over the square area is:
>>> 4*scipy.integrate.nquad(f, [[0,1],[0,1]])[0] / 12.565472446489999
0.27011854108867
So, approximately 27 % of the incoming light shines onto the square photodetector.
When it comes to your constants, it seems that something (at least units) is missing. My guess:
wavelength: 550 nm
circular aperture diameter: 0.0055" = 0.14 mm
distance from the aperture to the sensor: 2.8 mm
square sensor size 5.4 um x 5.4 um
The last one I just guessed from the image. As the sensor size is very much smaller than the distance, sin(ϴ) is very close to y/d, where d is distance and y displacement from the optical axis. By using these numbers x = ka sin(ϴ) = kay / d ≈ 1.54. For that number the intensity integral gives approximately 0.52 (or 52 %).
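Plugging in those guesses (a quick sketch; every value below is one of the assumptions listed above):
import numpy as np

lamb = 550e-9     # wavelength: 550 nm
a = 0.14e-3 / 2   # aperture radius: half the guessed 0.14 mm diameter
d = 2.8e-3        # aperture-to-sensor distance: 2.8 mm
y = 5.4e-6        # guessed sensor size: 5.4 um
k = 2 * np.pi / lamb
print(k * a * y / d)  # approximately 1.54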
If you are comparing this to some experimental value, remember that there are numerous error sources. The image on the image plane is a Fourier transform of the aperture. If there are small imperfections in the aperture edge, they may change the resulting spot. Airy rings are seldom as beautiful as astronomers think...

Related

Check if coordinates are within a specific area

I used various sources of information to determine the GPS coordinates of a traffic sign, and plotted them using plotly.express.scatter_mapbox and add_scattermapbox as follows:
The orange dot is a high end, "reference" measurement and the others are from different sources.
The numeric coordinates in this example are:
red: 51.4001213° 12.4291356°
purple: 51.400127° 12.429187°
green: 51.400106346232° 12.429278003005°
orange: 51.4000684461437° 12.4292323627949°
How can I calculate an area around the orange dot (e.g. 5 meters), find the coordinates which are inside this area, and how do I plot that area on my map?
Does this answer it? https://gis.stackexchange.com/a/25883
This is tricky for two reasons: first, limiting the points to a circle instead of a square; second, accounting for distortions in the distance calculations.
Many GISes include capabilities that automatically and transparently handle both complications. However, the tags here suggest that a GIS-independent description of an algorithm may be desirable.
To generate points uniformly, randomly, and independently within a circle of radius r around a location (x0, y0), start by generating two independent uniform random values u and v in the interval [0, 1). (This is what almost every random number generator provides you.) Compute
w = r * sqrt(u)
t = 2 * Pi * v
x = w * cos(t)
y = w * sin(t)
The desired random point is at location (x+x0, y+y0).
When using geographic (lat,lon) coordinates, then x0 (longitude) and y0 (latitude) will be in degrees but r will most likely be in meters (or feet or miles or some other linear measurement). First, convert the radius r into degrees as if you were located near the equator. Here, there are about 111,300 meters in a degree.
Second, after generating x and y as in step (1), adjust the x-coordinate for the shrinking of the east-west distances:
x' = x / cos(y0)   (with y0 expressed in radians)
The desired random point is at location (x'+x0, y+y0). This is an approximate procedure. For small radii (less than a few hundred kilometers) that do not extend over either pole of the earth, it will usually be so accurate you cannot detect any error even when generating tens of thousands of random points around each center (x0,y0).
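A minimal Python sketch of the whole procedure (the function and constant names are mine, not from any particular library):
import math
import random

METERS_PER_DEGREE = 111_300  # approximate meters per degree near the equator

def random_point_within(lat0, lon0, radius_m):
    """Uniform random (lat, lon) within radius_m meters of (lat0, lon0)."""
    r = radius_m / METERS_PER_DEGREE          # radius converted to degrees
    u, v = random.random(), random.random()
    w = r * math.sqrt(u)                      # sqrt(u) gives uniform area density
    t = 2 * math.pi * v
    x = w * math.cos(t) / math.cos(math.radians(lat0))  # east-west correction
    y = w * math.sin(t)
    return lat0 + y, lon0 + x

def is_within(lat, lon, lat0, lon0, radius_m):
    """Check whether (lat, lon) lies within radius_m of (lat0, lon0); small radii only."""
    dx = (lon - lon0) * math.cos(math.radians(lat0)) * METERS_PER_DEGREE
    dy = (lat - lat0) * METERS_PER_DEGREE
    return dx*dx + dy*dy <= radius_m*radius_m

# e.g. sample 100 points within 5 m of the orange reference measurement
pts = [random_point_within(51.4000684461437, 12.4292323627949, 5) for _ in range(100)]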

How to implement an interactive tone curve in Python?

I want to implement a photo editor in python using flask. So far, I managed to apply an s curve to a photo, like this:
import cv2
import numpy as np
image = cv2.imread('apple.jpg')
def sToneCurve(frame):
    look_up_table = np.zeros((256, 1), dtype='uint8')
    for i in range(256):
        look_up_table[i][0] = 255 * (np.sin(np.pi * (i / 255 - 1 / 2)) + 1) / 2
    return cv2.LUT(frame, look_up_table)
image_contrasted = sToneCurve(image)
cv2.imwrite('apple_dark.jpg', image_contrasted)
How could I implement an interactive tone curve, so that the user could select how they would like to edit the photo (like this: tone curve), instead of applying a predefined formula as in the code above? What would be the best approach, and which libraries and visualizations should I use for the curve plots?
You implement this using "standard" polynomial fitting: you have N points that you need a curve through, so you find the (N-1)th order polynomial that does that, then use that polynomial as your mapping function.
You're already using numpy, so use numpy.polynomial.polynomial.polyfit with:
x: all your points' x coordinates, including your black and white points (which in a proper tone curve the user should be able to move off of (0,0) and (1,1), respectively),
y: all your points' y coordinates,
deg: if the polynomial has to pass through all points, which it should, this should be equal to len(x) - 1, as two points define a line (a first degree polynomial), three points a quadratic curve (a second degree polynomial), etc. "The" polynomial through N points is an (N-1) degree polynomial,
the rest of the args shouldn't particularly matter.
This gives you a numpy array of polynomial coefficients (let's call that array c) that you can then use for mapping: any pixel with lightness/intensity value i should get mapped to:
mapped = f(i) = c[0] * i**0 + c[1] * i**1 + c[2] * i**2 + ...
Which thankfully numpy can do for you by simply using the corresponding polyval function.
And of course, to make that fast, what you really want to do is build a LUT that you can just directly consult, every time the user changes a coordinate in the tone curve UI, so:
from numpy.polynomial.polynomial import polyfit, polyval

# How big of a LUT you actually need depends entirely
# on the bit depth you're working with, of course...
BIT_DEPTH = 2**16
TONE_LUT = range(0, BIT_DEPTH)

def update_from_tone_ui(coordinates):
    """
    Called on user value update, with coordinates being
    a list-of-lists a la [[0,0], [0.1,0.1], ...]
    """
    global TONE_LUT
    x, y = zip(*coordinates)
    coefficients = polyfit(x, y, len(x) - 1)
    f = lambda i: clamp(polyval(i, coefficients), 0, 1)
    # And remember to make sure the input range to f() matches
    # the actual x/y domain that we used for the polyfit:
    divisor = BIT_DEPTH - 1
    # scale the [0,1] output back up to the 0..BIT_DEPTH-1 value range
    TONE_LUT = [divisor * f(i / divisor) for i in range(0, BIT_DEPTH)]
with clamp coming from "somewhere", but if you don't already have one then it's trivially implemented with some shortcut returns:
def clamp(n, floor, ceiling):
    if n < floor: return floor
    if n > ceiling: return ceiling
    return n
(And of course make sure to adjust your clamping values if you don't want your tone curve x and y coordinates in [0,1])
Now, rather than running the mapping function every time, you just directly look up the mapped value. Note that you get a bit of freedom in terms of precision: you could use a tone curve in which the x and y values run from 0 to 1, or you could have them run from 0 to whatever-bit-depth-you-use (2^8, 2^16, what have you), but whatever you use, make sure you scale your actual pixel intensities accordingly when you generate your LUT. Otherwise things will look really interesting.
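For instance, once the LUT exists, applying it to the 8-bit image from the question is a single lookup; a sketch, assuming BIT_DEPTH = 2**8 so that TONE_LUT has 256 entries:
import numpy as np
import cv2

image = cv2.imread('apple.jpg')               # 8-bit BGR image
lut = np.asarray(TONE_LUT, dtype='uint8')     # 256-entry table for 8-bit data
mapped = cv2.LUT(image, lut.reshape(256, 1))  # same mechanism as sToneCurve above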

Inverse FFT returns negative values when it should not

I have several points (x, y, z coordinates) in a 3D box with associated masses. I want to draw a histogram of the mass density found in spheres of a given radius R.
I have written a code that, provided I did not make any errors (which I think I may have), works in the following way:
1. My "real" data is something huge, thus I wrote a little code to generate non-overlapping points randomly, with arbitrary masses, in a box.
2. I compute a 3D histogram (weighted by mass) with a binning about 10 times smaller than the radius of my spheres.
3. I take the FFT of my histogram, compute the wave-modes (kx, ky and kz) and use them to multiply my histogram in Fourier space by the analytic expression of the 3D top-hat window (sphere filtering) function in Fourier space.
4. I inverse FFT my newly computed grid.
Thus, drawing a 1D histogram of the values in each bin would give me what I want.
My issue is the following: given what I do, there should not be any negative values in my inverted FFT grid (step 4), but I get some, with values much higher than the numerical error.
If I run my code on a small box (300x300x300 cm3, with the points separated by at least 1 cm) I do not get the issue. I do get it for 600x600x600 cm3 though.
If I set all the masses to 0, thus working on an empty grid, I do get back my 0 without any noted issues.
I give my code here in one full block so that it is easily copied.
import numpy as np
import matplotlib.pyplot as plt
import random
from numba import njit
# 1. Generate a bunch of points with masses from 1 to 3 separated by a radius of 1 cm
radius = 1
rangeX = (0, 100)
rangeY = (0, 100)
rangeZ = (0, 100)
rangem = (1,3)
qty = 20000 # or however many points you want
# Generate a set of all points within 1 of the origin, to be used as offsets later
deltas = set()
for x in range(-radius, radius+1):
    for y in range(-radius, radius+1):
        for z in range(-radius, radius+1):
            if x*x + y*y + z*z <= radius*radius:
                deltas.add((x, y, z))
X = []
Y = []
Z = []
M = []
excluded = set()
for i in range(qty):
    x = random.randrange(*rangeX)
    y = random.randrange(*rangeY)
    z = random.randrange(*rangeZ)
    m = random.uniform(*rangem)
    if (x, y, z) in excluded: continue
    X.append(x)
    Y.append(y)
    Z.append(z)
    M.append(m)
    excluded.update((x+dx, y+dy, z+dz) for (dx, dy, dz) in deltas)
print("There is ",len(X)," points in the box")
# Compute the 3D histogram
a = np.vstack((X, Y, Z)).T
b = 200
H, edges = np.histogramdd(a, weights=M, bins = b)
# Compute the FFT of the grid
Fh = np.fft.fftn(H, axes=(-3,-2, -1))
# Compute the different wave-modes
kx = 2*np.pi*np.fft.fftfreq(len(edges[0][:-1]))*len(edges[0][:-1])/(np.amax(X)-np.amin(X))
ky = 2*np.pi*np.fft.fftfreq(len(edges[1][:-1]))*len(edges[1][:-1])/(np.amax(Y)-np.amin(Y))
kz = 2*np.pi*np.fft.fftfreq(len(edges[2][:-1]))*len(edges[2][:-1])/(np.amax(Z)-np.amin(Z))
# I create a matrix containing the values of the filter in each point of the grid in Fourier space
R = 5
Kh = np.empty((len(kx),len(ky),len(kz)))
@njit(parallel=True)
def func_njit(kx, ky, kz, Kh):
    for i in range(len(kx)):
        for j in range(len(ky)):
            for k in range(len(kz)):
                if np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2) != 0:
                    Kh[i][j][k] = (np.sin((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R)-(np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R*np.cos((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R))*3/((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R)**3
                else:
                    Kh[i][j][k] = 1
    return Kh
Kh = func_njit(kx, ky, kz, Kh)
# I multiply each point of my grid by the associated value of the filter (multiplication in Fourier space = convolution in real space)
Gh = np.multiply(Fh, Kh)
# I take the inverse FFT of my filtered grid. I take the real part to get back floats but there should only be zeros for the imaginary part.
Density = np.real(np.fft.ifftn(Gh,axes=(-3,-2, -1)))
# Here it shows if there are negative values the magnitude of the error
print(np.min(Density))
D = Density.flatten()
N = np.mean(D)
# I then compute the histogram I want
hist, bins = np.histogram(D/N, bins='auto', density=True)
bin_centers = (bins[1:]+bins[:-1])*0.5
plt.plot(bin_centers, hist)
plt.xlabel('rho/rhom')
plt.ylabel('P(rho)')
plt.show()
Do you know why I'm getting these negative values? Do you think there is a simpler way to proceed?
Sorry if this is a very long post, I tried to make it very clear and will edit it with your comments, thanks a lot!
-EDIT-
A follow-up question on the issue can be found [here].
The filter you create in the frequency domain is only an approximation to the filter you want to create. The problem is that we are dealing with the DFT here, not the continuous-domain FT (with its infinite frequencies). The Fourier transform of a ball is indeed the function you describe, but this function has infinite support -- it is not band-limited!
By sampling this function only within a window, you are effectively multiplying it with an ideal low-pass filter (the rectangle of the domain). This low-pass filter, in the spatial domain, has negative values. Therefore, the filter you create also has negative values in the spatial domain.
This is a slice through the origin of the inverse transform of Kh (after I applied fftshift to move the origin to the middle of the image, for better display):
As you can tell here, there is some ringing that leads to negative values.
One way to overcome this ringing is to apply a windowing function in the frequency domain. Another option is to generate a ball in the spatial domain, and compute its Fourier transform. This second option would be the simplest to achieve. Do remember that the kernel in the spatial domain must also have the origin at the top-left pixel to obtain a correct FFT.
A windowing function is typically applied in the spatial domain to avoid issues with the image border when computing the FFT. Here, I propose to apply such a window in the frequency domain to avoid similar issues when computing the IFFT. Note, however, that this will always further reduce the bandwidth of the kernel (the windowing function would work as a low-pass filter after all), and therefore yield a smoother transition of foreground to background in the spatial domain (i.e. the spatial domain kernel will not have as sharp a transition as you might like). The best known windowing functions are Hamming and Hann windows, but there are many others worth trying out.
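For the second option, a minimal sketch (my own; note that R = 5 with the question's 0.5 cm bins corresponds to a radius of 10 bins): build the top-hat ball directly on the histogram grid and take its FFT:
import numpy as np

b = 200       # grid size, as in the question
R_bins = 10   # sphere radius in histogram bins (R = 5 cm / 0.5 cm per bin)

z, y, x = np.indices((b, b, b))
c = b // 2
ball = ((x - c)**2 + (y - c)**2 + (z - c)**2 <= R_bins**2).astype(float)
ball /= ball.sum()  # DC gain of 1, matching the analytic filter's value at k=0
# ifftshift moves the kernel origin to the top-left voxel, as required for the FFT
Kh = np.fft.fftn(np.fft.ifftshift(ball))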
Unsolicited advice:
I simplified your code to compute Kh to the following:
kr = np.sqrt(kx[:,None,None]**2 + ky[None,:,None]**2 + kz[None,None,:]**2)
kr *= R
Kh = (np.sin(kr)-kr*np.cos(kr))*3/(kr)**3
Kh[0,0,0] = 1
I find this easier to read than the nested loops. It should also be significantly faster, and avoid the need for njit. Note that you were computing the same distance (what I call kr here) 5 times. Factoring out such computation is not only faster, but yields more readable code.
Just a guess:
Where do you get the idea that the imaginary part MUST be zero? Have you ever tried taking the absolute value (sqrt(re^2 + im^2)) and forgetting about the phase, instead of just taking the real part? Just something that came to mind.

Two-body orbit modelling problems

Skip to Update 2 below, if you don't want to read too much background.
I'm trying to implement a model for simple orbital simulations (two body).
However, when I try to use the code I've written, the plots generated from the result look quite odd.
The program uses initial state vectors (position and velocity) to calculate the Keplerian orbital elements, which are used to then calculate the next position, and returned as the next two state vectors.
This seems to work fine, and by itself, plots correctly as long as I keep the plot on the orbital plane. But I would like to rotate the plot to the frame of reference (the parent body) so that I can see a cool 3D view of what the orbits look like (obvs).
Right now, I suspect that the bug is in how I convert the two state vectors from the orbital plane to the frame of reference. I am using the equations from step 6 of this document to create the following code (but applying individual rotation matrices [copied from here]):
from numpy import sin, cos, matrix, newaxis, asarray, squeeze, dot

def Rx(theta):
    """
    Return a rotation matrix for the X axis and angle *theta*
    """
    return matrix([
        [1, 0,           0          ],
        [0, cos(theta), -sin(theta) ],
        [0, sin(theta),  cos(theta) ],
    ], dtype="float64")

def Rz(theta):
    """
    Return a rotation matrix for the Z axis and angle *theta*
    """
    return matrix([
        [cos(theta), -sin(theta), 0],
        [sin(theta),  cos(theta), 0],
        [0,           0,          1],
    ], dtype="float64")

def rotate1(vector, O, i, w):
    # The starting value of *vector* is just a 1-dimensional numpy
    # array.
    # Transform into a column vector.
    vector = vector[:, newaxis]
    # Perform the rotation
    R = Rz(-O) * Rx(-i) * Rz(-w)
    res2 = dot(R, vector)
    # Transform back into a row vector (because that's what
    # the rest of the program uses)
    return squeeze(asarray(res2))
(For context, this is the full class I am using for the orbit model.)
When I plot X and Y coordinates from the result, I get this:
But when I change the rotation matrix to R = Rz(-O) * Rx(-i), I get this more plausible plot (although obviously missing one rotation, and slightly off-center):
And when I reduce it further to R = Rx(-i), as one would expect, I get this:
So as I said, I am fairly sure that it is not the orbital calculation code that is behaving weirdly, but rather some error in the rotation code. But I'm not sure where to narrow this down, as I'm pretty new to both numpy and matrix math in general.
Update: Based on stochastic's answer I transposed the matrices (R = Rz(-O).T * Rx(-i).T * Rz(-w).T), but then got this plot:
which made me wonder if my conversion to screen coordinates was somehow wrong -- but it looks correct to me (and is the same code as the more-correct plots with less rotation) namely:
def recenter(v_position, viewport_width, viewport_height):
    x, y, z = v_position
    # the size of the viewport in meters
    bounds = 20000000
    # viewport_width is the screen pixels (800)
    scale = viewport_width/bounds
    # Perform the scaling operation
    x *= scale
    y *= scale
    # recenter to screen X and Y measured from the top-left corner
    # of the viewport
    x += viewport_width/2
    y = viewport_height/2 - y
    # Cast to int, because we don't care about pixel fractions
    return int(x), int(y)
Update 2
Although I have triple-checked my implementation of the equations, as well as the rotations with stochastic's help, I still can't get the orbits to come out right. They still appear basically the same as in the plots above.
Using data from the NASA Horizons system, I set up an orbit with specific state vectors from the ISS (2457380.183935185 = A.D. 2015-Dec-23 16:24:52.0000 (TDB)), and checked them against the Kepler orbit elements for the same moment in time, which produces this result:
element                  mine (calculated)    NASA Horizons
inclination              0.900246137041       0.900246137041
true_anomaly             0.11497063007        0.0982485984565
long_of_asc_node         3.80727461492        3.80727461492
eccentricity             0.000429082122137    0.000501850615905
semi_major_axis          6778560.7037         6779057.01374
mean_anomaly             0.114872215066       0.0981501816537
argument_of_periapsis    0.843226618347       0.85994864996
The first column holds my calculated values and the second the NASA ones. Obviously some floating-point precision error is to be expected, but the variations in mean_anomaly and true_anomaly struck me as larger than I expected. (I'm currently running all of my numpy calculations with float128 numbers on a 64-bit system.)
In addition, the resulting orbit still looks like the (quite) eccentric first plot, above (even though I know that this LEO ISS orbit is quite circular). So I'm a bit stumped as to what the source of the problem could be.
I believe you have at least two problems.
After looking more closely at the orbital simulation you are doing (see this additional document from the comments), I think the main problem is the initially-very-reasonable-but-yet-untrue assumption that the final plot should look like an ellipse. In general it will not, since an orbiting body will not necessarily stay in a single plane.
The other problem, I think, is that your rotation matrices are the transpose of what they should be, per the document you described (see below).
On transposed rotation matrices
The document you cited does not directly specify whether R_x and R_z should be right-handed rotations of the axes or of the vector they will multiply, though you can figure it out from equation 9 (or 10). It turns out that they should be right-handed rotations of the axes, not the vector. That means that they should be defined like this:
return matrix([
[1, 0, 0 ],
[0, cos(theta), sin(theta) ],
[0,-sin(theta), cos(theta) ],
], dtype="float64")
instead of like this:
return matrix([
[1, 0, 0 ],
[0, cos(theta),-sin(theta) ],
[0, sin(theta), cos(theta) ],
], dtype="float64")
I found this out by reproducing equation 9 by hand on paper.
In that equation, look at the first component of the vector r(t).
There are two terms: one with o_x in it and one with o_y.
Look at the thing multiplying o_y. It is: -(sin(omega)*cos(Omega)+cos(omega)*cos(i)*sin(Omega)).
That leading minus sign is the key. It comes from the minus sign in the first row of your Rz matrix.
Since the Omega, i, and omega in equation 9 are all negated, that means that the minus sign needs to be on the second row of R_z, which would mean that R_z represents a right-handed rotation of the axes, not the vector.
Similarly, we can look at the o_y component of the last term and see that the minus sign needs to be on the second row of R_x, meaning (thank goodness for sanity) that both R_z and R_x are right-handed rotations of the axes.
Your Rx and Rz functions currently define right-handed rotations of a vector, not of the axes.
You can fix this by either (all three are equivalent):
Removing the minus signs on your Euler angles: Rz(O) * Rx(i) * Rz(w)
Transposing your rotation matrices: Rz(-O).T * Rx(-i).T * Rz(-w).T
Moving the minus sign in the definitions of Rx and Rz to the second-row sine term, as shown above (a quick check follows below)
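A quick numerical sanity check that the first two fixes agree (a sketch, using the question's original Rx/Rz definitions and arbitrary sample angles):
import numpy as np

O, i, w = 3.807, 0.900, 0.843             # sample angles in radians
R_fix1 = Rz(O) * Rx(i) * Rz(w)            # minus signs removed
R_fix2 = Rz(-O).T * Rx(-i).T * Rz(-w).T   # original matrices, transposed
print(np.allclose(R_fix1, R_fix2))        # True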
I am going to mark stochastic's answer as right, because a) he deserves the points for being so helpful, and b) his advice was fundamentally correct.
However the source of the weird plot actually ended up being these lines in the linked Orbit class:
self.v_position = self.rotate(v_position, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
self.v_velocity = self.rotate(v_velocity, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
Notice that the self.v_position property is updated before the call to rotate the velocity vector happens; one might also notice, when reading the code, that I in my cleverness decided to make all of the orbital element values methods wrapped in @property decorators to make the calculations clearer.
But of course, this also means the methods are called -- and the values recalculated -- every time a property is accessed. So the second call to self.rotate() happens with slightly different values of the orbital elements than the first call and, more importantly, with values that don't match up 100% correctly with the "current" position and velocity state vectors!
So after a few days of banging my head against this bug, I figured it out from a bit of yak-shaving I was doing in the form of a refactoring, and now it all works perfectly.
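In code, the shape of the fix is simply to read the @property values once and reuse the snapshot for both rotations (a sketch of the idea; the real code lives in the linked Orbit class):
# Read the recomputed-on-access properties exactly once...
O, i, w = self.long_of_asc_node, self.inclination, self.argument_of_periapsis
# ...then rotate both state vectors with the same, consistent element values
self.v_position = self.rotate(v_position, O, i, w)
self.v_velocity = self.rotate(v_velocity, O, i, w)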

Approximating data with a multi segment cubic bezier curve and a distance as well as a curvature constraint

I have some geo data (the image below shows the path of a river as red dots) which I want to approximate using a multi segment cubic bezier curve. Through other questions on stackoverflow here and here I found the algorithm by Philip J. Schneider from "Graphics Gems". I successfully implemented it and can report that even with thousands of points it is very fast. Unfortunately that speed comes with some disadvantages, namely that the fitting is done quite sloppily. Consider the following graphic:
The red dots are my original data and the blue line is the multi segment bezier created by the algorithm by Schneider. As you can see, the input to the algorithm was a tolerance which is at least as high as the green line indicates. Nevertheless, the algorithm creates a bezier curve which has too many sharp turns. You can see two of these unnecessary sharp turns in the image. It is easy to imagine a bezier curve with fewer sharp turns for the shown data while still maintaining the maximum tolerance condition (just push the bezier curve a bit into the direction of the magenta arrows). The problem seems to be that the algorithm picks data points from my original data as end points of the individual bezier curves (the magenta arrows indicate some suspects). With the endpoints of the bezier curves restricted like that, it is clear that the algorithm will sometimes produce rather sharp curvatures.
What I am looking for is an algorithm which approximates my data with a multi segment bezier curve with two constraints:
the multi segment bezier curve must never be more than a certain distance away from the data points (this is provided by the algorithm by Schneider)
the multi segment bezier curve must never create curvatures that are too sharp. One way to check this criterion would be to roll a circle with the minimum curvature radius along the multi segment bezier curve and check whether it touches all parts of the curve along its path, though it seems there is a better method involving the cross product of the first and second derivatives
The solutions I found which create better fits sadly either work only for single bezier curves (and omit the question of how to find good start and end points for each bezier curve in the multi segment bezier curve) or do not allow a minimum curvature constraint. I feel that the minimum curvature constraint is the tricky condition here.
Here is another example (this is hand drawn and not 100% precise):
Let's suppose that figure one shows both the curvature constraint (the circle must fit along the whole curve) and the maximum distance of any data point from the curve (which happens to be the radius of the circle, shown in green). A successful approximation of the red path in figure two is shown in blue. That approximation honors the curvature condition (the circle can roll inside the whole curve and touches it everywhere) as well as the distance condition (shown in green). Figure three shows a different approximation to the path. While it honors the distance condition, it is clear that the circle does not fit into the curvature any more. Figure four shows a path which is impossible to approximate with the given constraints because it is too pointy. This example is supposed to illustrate that to properly approximate some pointy turns in the path, the algorithm must choose control points which are not part of the path. Figure three shows that if control points along the path were chosen, the curvature constraint could not be fulfilled any more. This example also shows that the algorithm must give up on some inputs, as it is not possible to approximate them with the given constraints.
Does there exist a solution to this problem? The solution does not have to be fast. If it takes a day to process 1000 points, then that's fine. The solution does also not have to be optimal in the sense that it must result in a least squares fit.
In the end I will implement this in C and Python but I can read most other languages too.
I found the solution that fulfills my criteria. The solution is to first find a B-Spline that approximates the points in the least squares sense and then convert that spline into a multi segment bezier curve. B-Splines have the advantage that, in contrast to bezier curves, they will not pass through the control points, as well as providing a way to specify a desired "smoothness" of the approximation curve. The needed functionality to generate such a spline is implemented in the FITPACK library, to which scipy offers a Python binding. Let's suppose I read my data into the lists x and y, then I can do:
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
tck,u = interpolate.splprep([x,y],s=3)
unew = np.arange(0,1.01,0.01)
out = interpolate.splev(unew,tck)
plt.figure()
plt.plot(x,y,out[0],out[1])
plt.show()
The result then looks like this:
If I want the curve smoother, I can increase the s parameter to splprep. If I want the approximation closer to the data, I can decrease the s parameter for less smoothness. By going through multiple s parameters programmatically I can find a good parameter that fits the given requirements.
The question though is how to convert that result into a bezier curve. The answer was given in this email by Zachary Pincus. I will replicate his solution here to give a complete answer to my question:
def b_spline_to_bezier_series(tck, per=False):
    """Convert a parametric b-spline into a sequence of Bezier curves of the same degree.

    Inputs:
      tck : (t,c,k) tuple of b-spline knots, coefficients, and degree returned by splprep.
      per : if tck was created as a periodic spline, per *must* be true, else per *must* be false.

    Output:
      A list of Bezier curves of degree k that is equivalent to the input spline.
      Each Bezier curve is an array of shape (k+1,d) where d is the dimension of the
      space; thus the curve includes the starting point, the k-1 internal control
      points, and the endpoint, where each point is of d dimensions.
    """
    from scipy.interpolate import insert   # knot insertion (the original email used fitpack directly)
    from numpy import asarray, unique, split, sum, transpose
    t, c, k = tck
    t = asarray(t)
    try:
        c[0][0]
    except:
        # I can't figure out a simple way to convert nonparametric splines to
        # parametric splines. Oh well.
        raise TypeError("Only parametric b-splines are supported.")
    new_tck = tck
    if per:
        # ignore the leading and trailing k knots that exist to enforce periodicity
        knots_to_consider = unique(t[k:-k])
    else:
        # the first and last k+1 knots are identical in the non-periodic case, so
        # no need to consider them when increasing the knot multiplicities below
        knots_to_consider = unique(t[k+1:-k-1])
    # For each unique knot, bring its multiplicity up to the next multiple of k+1.
    # This removes all continuity constraints between each of the original knots,
    # creating a set of independent Bezier curves.
    desired_multiplicity = k + 1
    for x in knots_to_consider:
        current_multiplicity = sum(t == x)
        remainder = current_multiplicity % desired_multiplicity
        if remainder != 0:
            # add enough knots to bring the current multiplicity up to the desired multiplicity
            number_to_insert = desired_multiplicity - remainder
            new_tck = insert(x, new_tck, number_to_insert, per)
    tt, cc, kk = new_tck
    # strip off the last k+1 knots, as they are redundant after knot insertion
    bezier_points = transpose(cc)[:-desired_multiplicity]
    if per:
        # again, ignore the leading and trailing k knots
        bezier_points = bezier_points[k:-k]
    # group the points into the desired bezier curves
    return split(bezier_points, len(bezier_points) // desired_multiplicity, axis=0)
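Usage is then just (a sketch, assuming the tck and the matplotlib setup from the snippet further up):
beziers = b_spline_to_bezier_series(tck)    # list of (k+1, 2) control-point arrays
for bez in beziers:
    plt.plot(bez[:, 0], bez[:, 1], 'k--')   # draw each segment's control polygon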
So B-Splines, FITPACK, numpy and scipy saved my day :)
polygonize data
find the order of points: just find the closest points to each other and try to connect them 'by lines'. Avoid looping back to the origin point
compute the derivative along the path
it is the change of direction of the 'lines'; where you hit a local min or max, there is your control point ... Do this to reduce your input data (leave just the control points).
curve
now use these points as control points. I strongly recommend an interpolation polynomial for both x and y separately, for example something like this:
x=a0+a1*t+a2*t*t+a3*t*t*t
y=b0+b1*t+b2*t*t+b3*t*t*t
where a0..a3 are computed like this:
d1=0.5*(p2.x-p0.x);
d2=0.5*(p3.x-p1.x);
a0=p1.x;
a1=d1;
a2=(3.0*(p2.x-p1.x))-(2.0*d1)-d2;
a3=d1+d2+(2.0*(-p2.x+p1.x));
b0..b3 are computed in the same way, but use the y coordinates of course
p0..p3 are the control points for the cubic interpolation curve
t = <0.0,1.0> is the curve parameter, going from p1 to p2
This ensures that position and first derivative are continuous (C1); you can also use BEZIER, but it will not match as well as this.
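In Python, the coefficient formulas above are a direct transcription (my sketch; call it once with the x values of p0..p3 and once with the y values):
def interp_cubic_coeffs(p0, p1, p2, p3):
    # Coefficients for x(t) = a0 + a1*t + a2*t^2 + a3*t^3 from p1 to p2
    d1 = 0.5 * (p2 - p0)
    d2 = 0.5 * (p3 - p1)
    a0 = p1
    a1 = d1
    a2 = 3.0 * (p2 - p1) - 2.0 * d1 - d2
    a3 = d1 + d2 + 2.0 * (p1 - p2)
    return a0, a1, a2, a3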
[edit1] too sharp edges are a BIG problem
To solve it you can remove points from your dataset before obtaining the control points. I can think of two ways to do it right now ... choose whichever is better for you
remove points from the dataset with too high a first derivative
dx/dl or dy/dl, where x, y are coordinates and l is the curve length (along its path). The exact computation of the curvature radius from curve derivatives is tricky
remove points from the dataset that lead to too small a curvature radius
compute the intersection of the perpendicular axes (red lines in the image) erected at the midpoints of neighboring line segments (black lines); the distance from that intersection to the joint point (blue line) is your curvature radius. When the curvature radius is smaller than your limit, remove that point ...
Now if you really need only BEZIER cubics then you can convert my interpolation cubic to a BEZIER cubic like this:
// ---------------------------------------------------------------------------
// x=cx[0]+(t*cx[1])+(tt*cx[2])+(ttt*cx[3]); // cubic x=f(t), t = <0,1>
// ---------------------------------------------------------------------------
// cubic matrix bz4 = it4
// ---------------------------------------------------------------------------
// cx[0]= ( x0) = ( X1)
// cx[1]= (3.0*x1)-(3.0*x0) = (0.5*X2) -(0.5*X0)
// cx[2]= (3.0*x2)-(6.0*x1)+(3.0*x0) = -(0.5*X3)+(2.0*X2)-(2.5*X1)+( X0)
// cx[3]= ( x3)-(3.0*x2)+(3.0*x1)-( x0) = (0.5*X3)-(1.5*X2)+(1.5*X1)-(0.5*X0)
// ---------------------------------------------------------------------------
const double m=1.0/6.0;
double x0,y0,x1,y1,x2,y2,x3,y3;
x0 = X1; y0 = Y1;
x1 = X1-(X0-X2)*m; y1 = Y1-(Y0-Y2)*m;
x2 = X2+(X1-X3)*m; y2 = Y2+(Y1-Y3)*m;
x3 = X2; y3 = Y2;
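The same conversion in Python, for one coordinate (a sketch mirroring the C code above):
def interp_to_bezier(X0, X1, X2, X3):
    # Bezier control values for the interpolation cubic from X1 to X2
    m = 1.0 / 6.0
    return X1, X1 - (X0 - X2) * m, X2 + (X1 - X3) * m, X2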
In case you need the reverse conversion see:
Bezier curve with control points within the curve
The question was posted long ago, but here is a simple solution based on splprep, finding the minimal value of s that fulfills a minimum curvature radius criterion.
route is the set of input points, with the first dimension being the number of points.
import numpy as np
from scipy.interpolate import splprep, splev

# The minimum curvature radius we want to enforce
minCurvatureConstraint = 2000
# Relative tolerance on the radius
relTol = 1.e-6
# Initial values for bisection search, should bound the solution
s_0 = 0
minCurvature_0 = 0
s_1 = 100000000  # Should be high enough to produce curvature radius larger than constraint
s_1 *= 2
minCurvature_1 = float('inf')
while np.abs(minCurvature_0 - minCurvature_1) > minCurvatureConstraint*relTol:
    s = 0.5 * (s_0 + s_1)
    tck, u = splprep(np.transpose(route), s=s)
    smoothed_route = splev(u, tck)
    # Compute radius of curvature
    derivative1 = splev(u, tck, der=1)
    derivative2 = splev(u, tck, der=2)
    xprim = derivative1[0]
    xprimprim = derivative2[0]
    yprim = derivative1[1]
    yprimprim = derivative2[1]
    curvature = 1.0 / np.abs((xprim*yprimprim - yprim*xprimprim) / np.power(xprim*xprim + yprim*yprim, 3 / 2))
    minCurvature = np.min(curvature)
    print("s is %g => Minimum curvature radius is %g" % (s, np.min(curvature)))
    # Perform bisection
    if minCurvature > minCurvatureConstraint:
        s_1 = s
        minCurvature_1 = minCurvature
    else:
        s_0 = s
        minCurvature_0 = minCurvature
It may require some refinements, such as iterating to find a suitable s_1, but it works.
