I have several points on the unit sphere that are distributed according to the algorithm described in https://www.cmu.edu/biolphys/deserno/pdf/sphere_equi.pdf (and implemented in the code below). On each of these points, I have a value that in my particular case represents 1 minus a small error. The errors are in [0, 0.1] if this is important, so my values are in [0.9, 1].
Sadly, computing the errors is a costly process and I cannot do this for as many points as I want. Still, I want my plots to look like I am plotting something "continuous".
So I want to fit an interpolation function to my data, to be able to sample as many points as I want.
After a little bit of research I found scipy.interpolate.SmoothSphereBivariateSpline which seems to do exactly what I want. But I cannot make it work properly.
Question: what can I use to interpolate (spline, linear interpolation, anything would be fine for the moment) my data on the unit sphere? An answer can be either "you misused scipy.interpolation, here is the correct way to do this" or "this other function is better suited to your problem".
Sample code that should be executable with numpy and scipy installed:
import typing as ty
import numpy
import scipy.interpolate
def get_equidistant_points(N: int) -> ty.List[numpy.ndarray]:
"""Generate approximately n points evenly distributed accros the 3-d sphere.
This function tries to find approximately n points (might be a little less
or more) that are evenly distributed accros the 3-dimensional unit sphere.
The algorithm used is described in
https://www.cmu.edu/biolphys/deserno/pdf/sphere_equi.pdf.
"""
# Unit sphere
r = 1
points: ty.List[numpy.ndarray] = list()
a = 4 * numpy.pi * r ** 2 / N
d = numpy.sqrt(a)
m_v = int(numpy.round(numpy.pi / d))
d_v = numpy.pi / m_v
d_phi = a / d_v
for m in range(m_v):
v = numpy.pi * (m + 0.5) / m_v
m_phi = int(numpy.round(2 * numpy.pi * numpy.sin(v) / d_phi))
for n in range(m_phi):
phi = 2 * numpy.pi * n / m_phi
points.append(
numpy.array(
[
numpy.sin(v) * numpy.cos(phi),
numpy.sin(v) * numpy.sin(phi),
numpy.cos(v),
]
)
)
return points
def cartesian2spherical(x: float, y: float, z: float) -> numpy.ndarray:
r = numpy.linalg.norm([x, y, z])
theta = numpy.arccos(z / r)
phi = numpy.arctan2(y, x)
return numpy.array([r, theta, phi])
n = 100
points = get_equidistant_points(n)
# Random here, but costly in real life.
errors = numpy.random.rand(len(points)) / 10
# Change everything to spherical to use the interpolator from scipy.
ideal_spherical_points = numpy.array([cartesian2spherical(*point) for point in points])
r_interp = 1 - errors
theta_interp = ideal_spherical_points[:, 1]
phi_interp = ideal_spherical_points[:, 2]
# Change phi coordinate from [-pi, pi] to [0, 2pi] to please scipy.
phi_interp[phi_interp < 0] += 2 * numpy.pi
# Create the interpolator.
interpolator = scipy.interpolate.SmoothSphereBivariateSpline(
theta_interp, phi_interp, r_interp
)
# Creating the finer theta and phi values for the final plot
theta = numpy.linspace(0, numpy.pi, 100, endpoint=True)
phi = numpy.linspace(0, numpy.pi * 2, 100, endpoint=True)
# Creating the coordinate grid for the unit sphere.
X = numpy.outer(numpy.sin(theta), numpy.cos(phi))
Y = numpy.outer(numpy.sin(theta), numpy.sin(phi))
Z = numpy.outer(numpy.cos(theta), numpy.ones(100))
thetas, phis = numpy.meshgrid(theta, phi)
heatmap = interpolator(thetas, phis)
Issue with the code above:
With the code as-is, I have a
ValueError: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots.
that is raised when initialising the interpolator instance.
The error above seems to say that I should change the value of s, which is one of the parameters of scipy.interpolate.SmoothSphereBivariateSpline. I tested different values of s ranging from 0.0001 to 100000; the code above always raises, either the exception described above or:
ValueError: Error code returned by bispev: 10
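For reference, this is roughly how I passed s in those attempts (a minimal sketch; the value shown is just one of the many I tried, and none of them worked for me):
interpolator = scipy.interpolate.SmoothSphereBivariateSpline(
    theta_interp, phi_interp, r_interp, s=3.5
)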
Edit: I am including my findings here. They can't really be considered a solution, which is why I am editing the question rather than posting an answer.
With more research I found this question: Using Radial Basis Functions to Interpolate a Function on a Sphere. The author has exactly the same problem as me and uses a different interpolator: scipy.interpolate.Rbf. I changed the above code by replacing the interpolator and plotting:
# Create the interpolator.
interpolator = scipy.interpolate.Rbf(theta_interp, phi_interp, r_interp)
# Creating the finer theta and phi values for the final plot
plot_points = 100
theta = numpy.linspace(0, numpy.pi, plot_points, endpoint=True)
phi = numpy.linspace(0, numpy.pi * 2, plot_points, endpoint=True)
# Creating the coordinate grid for the unit sphere.
X = numpy.outer(numpy.sin(theta), numpy.cos(phi))
Y = numpy.outer(numpy.sin(theta), numpy.sin(phi))
Z = numpy.outer(numpy.cos(theta), numpy.ones(plot_points))
thetas, phis = numpy.meshgrid(theta, phi)
heatmap = interpolator(thetas, phis)
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
colormap = cm.inferno
normaliser = mpl.colors.Normalize(vmin=numpy.min(heatmap), vmax=1)
scalar_mappable = cm.ScalarMappable(cmap=colormap, norm=normaliser)
scalar_mappable.set_array([])
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(
X,
Y,
Z,
facecolors=colormap(normaliser(heatmap)),
alpha=0.7,
cmap=colormap,
)
plt.colorbar(scalar_mappable)
plt.show()
This code runs smoothly and gives the following result:
The interpolation seems OK except along one line where it is discontinuous, just like in the question that led me to this class. One of the answers suggests using a different distance, better suited to spherical coordinates: the haversine distance.
def haversine(x1, x2):
theta1, phi1 = x1
theta2, phi2 = x2
return 2 * numpy.arcsin(
numpy.sqrt(
numpy.sin((theta2 - theta1) / 2) ** 2
+ numpy.cos(theta1) * numpy.cos(theta2) * numpy.sin((phi2 - phi1) / 2) ** 2
)
)
# Create the interpolator.
interpolator = scipy.interpolate.Rbf(theta_interp, phi_interp, r_interp, norm=haversine)
which, when executed, gives a warning:
LinAlgWarning: Ill-conditioned matrix (rcond=1.33262e-19): result may not be accurate.
self.nodes = linalg.solve(self.A, self.di)
and a result that is not at all the expected one: the interpolated function has values that go down to -1, which is clearly wrong.
You can use Cartesian coordinates instead of spherical coordinates.
The default norm parameter ('euclidean') used by Rbf is then sufficient:
# interpolation
x, y, z = numpy.array(points).T
interpolator = scipy.interpolate.Rbf(x, y, z, r_interp)
# predict
heatmap = interpolator(X, Y, Z)
Here the result:
ax.plot_surface(
X, Y, Z,
rstride=1, cstride=1,
# or rcount=50, ccount=50,
facecolors=colormap(normaliser(heatmap)),
cmap=colormap,
alpha=0.7, shade=False
)
ax.set_xlabel('x axis')
ax.set_ylabel('y axis')
ax.set_zlabel('z axis')
You can also use a cosine distance if you want (norm parameter):
import scipy.spatial.distance

def cosine(XA, XB):
if XA.ndim == 1:
XA = numpy.expand_dims(XA, axis=0)
if XB.ndim == 1:
XB = numpy.expand_dims(XB, axis=0)
return scipy.spatial.distance.cosine(XA, XB)
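A minimal usage sketch (my addition, mirroring the Cartesian call above; whether this callable signature matches what your scipy version expects for the norm parameter is worth verifying):
interpolator = scipy.interpolate.Rbf(x, y, z, r_interp, norm=cosine)
heatmap = interpolator(X, Y, Z)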
In order to better see the differences, I stacked the two images, subtracted them and inverted the layer.
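Alternatively (a suggestion of mine, not part of the original comparison), the two interpolations can be compared numerically instead of via image editing:
heatmap_euclidean = scipy.interpolate.Rbf(x, y, z, r_interp)(X, Y, Z)
heatmap_cosine = scipy.interpolate.Rbf(x, y, z, r_interp, norm=cosine)(X, Y, Z)
print(numpy.abs(heatmap_euclidean - heatmap_cosine).max())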
I'm trying to plot angle vs. time for the output angle of a four-bar linkage (angle fi4 in the image below). This angle is calculated using the solution given in https://scholar.cu.edu.eg/?q=anis/files/week04-mdp206-position_analysis-draft.pdf, page 23.
When I plot fi4(t) I get some strange results. The diagram displays the input angle fi2 in blue and the output angle fi4 in red. Why is fi2 fluctuating over time? Shouldn't fi4 follow some sort of sine-like curve?
Am I missing something here?
Four-bar linkage:
The code:
from __future__ import division
import math
import numpy as np
import matplotlib.pyplot as plt
# Input
#lengths of links (tube testing machine actual lengths)
a = 45.5 #mm
b = 250 #mm
c = 140 #mm
d = 244.244 #mm
# Solution for fi2 being a time function, f(time) = angle
f = 16.7/60 #/s
omega = 2 * np.pi * f #rad/s
t = np.linspace(0, 50, 100)
y = a * np.sin(omega * t)
x = a * np.cos(omega * t)
fi2 = np.arctan(y/x)
# Solution of the vector loop equation
#https://scholar.cu.edu.eg/?q=anis/files/week04-mdp206-position_analysis-draft.pdf
K1 = d/a
K2 = d/c
K3 = (a**2 - b**2 + c**2 + d**2)/(2*a*c)
A = np.cos(fi2) - K1 - K2*np.cos(fi2) + K3
B = -2*np.sin(fi2)
C = K1 - (K2+1)*np.cos(fi2) + K3
fi4_1 = 2*np.arctan((-B+np.sqrt(B**2 - 4*A*C))/(2*A))
fi4_2 = 2*np.arctan((-B-np.sqrt(B**2 - 4*A*C))/(2*A))
# Plot the fi2 time diagram and fi4 time diagram
plt.plot(t, np.degrees(fi2), color = 'blue')
plt.plot(t, np.degrees(fi4_2), color = 'red')
plt.show()
Diagram:
First, the time grid np.linspace(0, 50, 100) is too coarse: at roughly 0.28 revolutions per second, only 100 samples over 50 s undersample the motion and the curves look jagged. Replace it with:
t = np.linspace(0, 5, 100)
Second, all the calculations involving the bare np.arctan() are incorrect. You should use np.arctan2(y, x), which determines the correct quadrant (unlike anything based on y/x where the respective signs of x and y are lost). So:
fi2 = np.arctan2(y, x) # not: np.arctan(y/x)
...
fi4_1 = 2 * np.arctan2(-B + np.sqrt(B**2 - 4*A*C), 2*A)
fi4_2 = 2 * np.arctan2(-B - np.sqrt(B**2 - 4*A*C), 2*A)
Putting some labels on your plots and showing both solutions for θ_4:
plt.plot(t, np.degrees(fi2) % 360, color = 'k', label=r'$θ_2$')
plt.plot(t, np.degrees(fi4_1) % 360, color = 'b', label=r'$θ_{4_1}$')
plt.plot(t, np.degrees(fi4_2) % 360, color = 'r', label=r'$θ_{4_2}$')
plt.xlabel('t [s]')
plt.ylabel('degrees')
plt.legend()
plt.show()
With these mods, we get:
BTW, do you want to see an amazingly lazy way of solving problems like these? Much more inefficient than your code, but much easier to derive (e.g. for other structures) without trying to express the closed form of your solution:
from scipy.optimize import fsolve
def polar(r, theta):
return r * np.array((np.cos(theta), np.sin(theta)))
def f(th34, th2):
th3, th4 = th34 # solve simultaneously for theta_3 and theta_4
pb_23 = polar(a, th2) + polar(b, th3) # point B based on links a, b
pb_14 = polar(d, 0) + polar(c, th4) # point B based on links d, c
return pb_23 - pb_14 # error: difference of the two
def solve(th2):
th4_1 = np.array([fsolve(f, [0, -1.5], args=(th2_k,))[1] for th2_k in th2])
th4_2 = np.array([fsolve(f, [0, 1.5], args=(th2_k,))[1] for th2_k in th2])
return th4_1, th4_2
Application:
t = np.linspace(0, 5, 100)
th2 = omega * t
th4_1, th4_2 = solve(th2)
twopi = 2 * np.pi
np.allclose(th4_1 % twopi, fi4_1 % twopi)
# True
np.allclose(th4_2 % twopi, fi4_2 % twopi)
# True
Depending on the structure of your mechanism (e.g. 5 links), you may have more than two solutions, and of course more angles, so you'd have to adapt the code above. But you get the idea.
Be warned: fsolve iterates to find a suitable (close enough) solution, so as I said, it is much slower than your closed form.
Update (some clarification/explanation):
The function f computes the position of the point B in two different ways (via R2-R3 and via R1-R4) and returns the difference (as a vector). We solve for the difference to be zero.
That function takes two arguments: one 2-dimensional variable (th34, which is an array [th3, th4]) and one parameter th2; the parameter is constant during one run of fsolve.
The values [0, -1.5] and [0, 1.5] are initialization values (guesses) for th34 (th3 and th4). We call fsolve twice to get the two possible solutions.
All angles refer to your figure. I use th for θ (theta, not phi), but I kept the original names fi4_1 and fi4_2 for comparison.
Modulo 2*pi, th4_1 should be equal to fi4_1 etc., which is tested by np.allclose to account for numerical rounding errors.
Imagine someone jumping off a balcony at a certain angle theta and velocity v0 (the height of the balcony is denoted ystar). Looking at this problem in 2D and considering drag, you get a system of differential equations which can be solved with a Runge-Kutta method (I chose the explicit midpoint method; I am not sure what the Butcher tableau for this one is). I implemented this and it works perfectly fine: for some given initial conditions I get the trajectory of the moving particle.
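(A side note added here for completeness, a standard textbook fact: the explicit midpoint step y_{n+1} = y_n + h*f(t_n + h/2, y_n + (h/2)*f(t_n, y_n)) is the two-stage Runge-Kutta method with Butcher tableau

  0  |
 1/2 | 1/2
-----+---------
     |  0    1

which is exactly what the explicit_midpoint_step function below implements.)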
My problem is that I want to fix two of the initial conditions (the starting point on the x-axis is zero and on the y-axis is ystar) and make sure that the trajectory goes through a certain point on the x-axis (call it xstar). Of course, multiple combinations of the other two initial conditions, which in this case are the velocities in the x- and y-direction, achieve this. The problem is that I don't know how to implement that.
The code that I used to solve the problem up to this point:
1) Implementation of the Runge-Kutta method
import numpy as np
import matplotlib.pyplot as plt
def integrate(methode_step, rhs, y0, T, N):
star = (int(N+1),y0.size)
y= np.empty(star)
t0, dt = 0, 1.* T/N
y[0,...] = y0
for i in range(0,int(N)):
y[i+1,...]=methode_step(rhs,y[i,...], t0+i*dt, dt)
t = np.arange(N+1) * dt
return t,y
def explicit_midpoint_step(rhs, y0, t0, dt):
return y0 + dt * rhs(t0+0.5*dt,y0+0.5*dt*rhs(t0,y0))
def explicit_midpoint(rhs,y0,T,N):
return integrate(explicit_midpoint_step,rhs,y0,T,N)
2) Implementation of the right-hand side of the differential equation and the necessary parameters
A = 1.9/2.
cw = 0.78
rho = 1.293
g = 9.81
# Mass and reference length
l = 1.95
m = 118
# Position
xstar = 8*l
ystar = 4*l
def rhs(t,y):
lam = cw * A * rho /(2 * m)
return np.array([y[1],-lam*y[1]*np.sqrt(y[1]**2+y[3]**2),y[3],-lam*y[3]*np.sqrt(y[1]**2+y[3]**2)-g])
3) solving the problem with it
# Parametrize the two dimensional velocity with an angle theta and speed v0
v0 = 30
theta = np.pi/6
v0x = v0 * np.cos(theta)
v0y = v0 * np.sin(theta)
# Initial conditions
z0 = np.array([0, v0x, ystar, v0y])
# Calculate solution
t, z = explicit_midpoint(rhs, z0, 5, 1000)
4) Visualization
plt.figure()
plt.plot(0,ystar,"ro")
plt.plot(x,0,"ro")
plt.plot(z[:,0],z[:,1])
plt.grid(True)
plt.xlabel(r"$x$")
plt.ylabel(r"$y$")
plt.show()
To make the question concrete: with this setup in mind, how do I find all possible combinations of v0 and theta such that z[some_element, 0] == xstar?
I tried some things, of course, mainly the brute-force method of fixing theta and then trying out all the possible velocities (in an interval that makes sense), but in the end I didn't know how to compare the resulting arrays with the desired result...
Since this is mainly a coding issue, I hope Stack Overflow is the right place to ask for help...
EDIT:
As requested here is my try to solve the problem (replacing 3) and 4) from above)..
theta = np.pi/4.
xy = np.zeros((50,1001,2))
z1 = np.zeros((1001,2))
count=0
for v0 in range(0,50):
v0x = v0 * np.cos(theta)
v0y = v0 * np.sin(theta)
z0 = np.array([0, v0x, ystar, v0y])
# Calculate solution
t, z = explicit_midpoint(rhs, z0, 5, 1000)
if np.around(z[:,0],3).any() == round(xstar,3):
z1[:,0] = z[:,0]
z1[:,1] = z[:,2]
break
else:
xy[count,:,0] = z[:,0]
xy[count,:,1] = z[:,2]
count+=1
plt.figure()
plt.plot(0,ystar,"ro")
plt.plot(xstar,0,"ro")
for k in range(0,50):
plt.plot(xy[k,:,0],xy[k,:,1])
plt.plot(z[:,0],z[:,1])
plt.grid(True)
plt.xlabel(r"$x$")
plt.ylabel(r"$y$")
plt.show()
I'm sure that I'm using the .any() function wrong. The idea there is to round the values of z[:,0] to three digits and then compare them to xstar; if one matches, the loop should terminate and return the current z, and if not, it should save the trajectory in another array and then increase v0.
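For reference, this is (I think) the test I actually meant, a corrected version of the np.around(...).any() line above:
# True if ANY sampled x-value of the trajectory lies within 1e-3 of xstar
close_enough = np.isclose(z[:, 0], xstar, atol=1e-3).any()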
Edit 2018-07-16
Here I post a corrected answer taking into account the drag by air.
Below is a python script to compute the set of (v0,theta) values so that the air-dragged trajectory passes through (x,y) = (xstar,0) at some time t=tstar. I used the trajectory without air-drag as the initial guess and also to guess the dependence of x(tstar) on v0 for the first refinement. The number of iterations needed to arrive at the correct v0 was typically 3 to 4. The script finished in 0.99 seconds on my laptop, including the time for generating figures.
The script generates two figures and one text file.
fig_xdrop_v0_theta.png
The black dots indicate the solution set (v0, theta).
The yellow line indicates the reference (v0,theta) which would be a solution if there were no air drag.
fig_traj_sample.png
Checking that the trajectory (blue solid line) passes through (x,y)=(xstar,0) when (v0,theta) is sampled from the solution set.
The black dashed line shows a trajectory without drag by air as a reference.
output.dat
contains the numerical data of (v0, theta) as well as the landing time tstar and the number of iterations needed to find v0.
Here begins the script.
#!/usr/bin/env python3
import numpy as np
import scipy.integrate
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as img
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['lines.markeredgewidth'] = 1.0
mpl.rcParams['axes.formatter.limits'] = (-4,4)
#mpl.rcParams['axes.formatter.limits'] = (-2,2)
mpl.rcParams['axes.labelsize'] = 'large'
mpl.rcParams['xtick.labelsize'] = 'large'
mpl.rcParams['ytick.labelsize'] = 'large'
mpl.rcParams['xtick.direction'] = 'out'
mpl.rcParams['ytick.direction'] = 'out'
############################################
len_ref = 1.95
xstar = 8.0*len_ref
ystar = 4.0*len_ref
g_earth = 9.81
#
mass = 118
area = 1.9/2.
cw = 0.78
rho = 1.293
lam = cw * area * rho /(2.0 * mass)
############################################
ngtheta=51
theta_min = -0.1*np.pi
theta_max = 0.4*np.pi
theta_grid = np.linspace(theta_min, theta_max, ngtheta)
#
ngv0=100
v0min =6.0
v0max =18.0
v0_grid=np.linspace(v0min, v0max, ngv0)
# .. this grid is used for the initial coarse scan with the reference trajectory
############################################
outf=open('output.dat','w')
print('data file generated: output.dat')
###########################################
def calc_tstar_ref_and_x_ref_at_tstar_ref(v0, theta, ystar, g_earth):
'''return the drop time t* and drop point x(t*) of a reference trajectory
without air drag.
'''
vx = v0*np.cos(theta)
vy = v0*np.sin(theta)
ts_ref = (vy+np.sqrt(vy**2+2.0*g_earth*ystar))/g_earth
x_ref = vx*ts_ref
return (ts_ref, x_ref)
def rhs_drag(yvec, time, g_earth, lamb):
    '''right-hand side of the equations of motion with air drag:
    dx/dt   = v_x
    dy/dt   = v_y
    dv_x/dt = -lambda v_x sqrt(v_x^2 + v_y^2)
    dv_y/dt = -lambda v_y sqrt(v_x^2 + v_y^2) - g
    yvec[0] .. x
    yvec[1] .. y
    yvec[2] .. v_x
    yvec[3] .. v_y
    '''
    vnorm = (yvec[2]**2 + yvec[3]**2)**0.5
    return [yvec[2], yvec[3], -lamb*yvec[2]*vnorm, -lamb*yvec[3]*vnorm - g_earth]
def try_tstar_drag(v0, theta, ystar, g_earth, lamb, tstar_search_grid):
'''one trial run to find the drop point x(t*), y(t*) of a trajectory
under the air drag.
'''
tinit=0.0
tgrid = [tinit]+list(tstar_search_grid)
    yvec_list = scipy.integrate.odeint(rhs_drag,
                                       [0.0, ystar, v0*np.cos(theta), v0*np.sin(theta)],
                                       tgrid, args=(g_earth, lamb))
y_drag = [yvec[1] for yvec in yvec_list]
x_drag = [yvec[0] for yvec in yvec_list]
if y_drag[0]<0.0:
ierr=-1
jtstar=0
tstar_braket=None
elif y_drag[-1]>0.0:
ierr=1
jtstar=len(y_drag)-1
tstar_braket=None
else:
ierr=0
for jt in range(len(y_drag)-1):
if y_drag[jt+1]*y_drag[jt]<=0.0:
tstar_braket=[tgrid[jt],tgrid[jt+1]]
if abs(y_drag[jt+1])<abs(y_drag[jt]):
jtstar = jt+1
else:
jtstar = jt
break
tstar_est = tgrid[jtstar]
x_drag_at_tstar_est = x_drag[jtstar]
y_drag_at_tstar_est = y_drag[jtstar]
return (tstar_est, x_drag_at_tstar_est, y_drag_at_tstar_est, ierr, tstar_braket)
def calc_x_drag_at_tstar(v0, theta, ystar, g_earth, lamb, tstar_est,
eps_y=1.0e-3, ngt_search=20,
rel_range_lower=0.8, rel_range_upper=1.2,
num_try=5):
    '''compute the drop point x(t*) of a trajectory under the air drag.
    '''
flg_success=False
tstar_est_lower=tstar_est*rel_range_lower
tstar_est_upper=tstar_est*rel_range_upper
for jtry in range(num_try):
tstar_search_grid = np.linspace(tstar_est_lower, tstar_est_upper, ngt_search)
tstar_est, x_drag_at_tstar_est, y_drag_at_tstar_est, ierr, tstar_braket \
= try_tstar_drag(v0, theta, ystar, g_earth, lamb, tstar_search_grid)
if ierr==-1:
tstar_est_upper = tstar_est_lower
tstar_est_lower = tstar_est_lower*rel_range_lower
elif ierr==1:
tstar_est_lower = tstar_est_upper
tstar_est_upper = tstar_est_upper*rel_range_upper
else:
if abs(y_drag_at_tstar_est)<eps_y:
flg_success=True
break
else:
tstar_est_lower=tstar_braket[0]
tstar_est_upper=tstar_braket[1]
return (tstar_est, x_drag_at_tstar_est, y_drag_at_tstar_est, flg_success)
def find_v0(xstar, v0_est, theta, ystar, g_earth, lamb, tstar_est,
eps_x=1.0e-3, num_try=6):
'''solve for v0 so that x(t*)==x*.
'''
flg_success=False
v0_hist=[]
x_drag_at_tstar_hist=[]
jtry_end=None
for jtry in range(num_try):
tstar_est, x_drag_at_tstar_est, y_drag_at_tstar_est, flg_success_x_drag \
= calc_x_drag_at_tstar(v0_est, theta, ystar, g_earth, lamb, tstar_est)
v0_hist.append(v0_est)
x_drag_at_tstar_hist.append(x_drag_at_tstar_est)
if not flg_success_x_drag:
break
elif abs(x_drag_at_tstar_est-xstar)<eps_x:
flg_success=True
jtry_end=jtry
break
else:
# adjust v0
# better if tstar_est is also adjusted, but maybe that is too much.
if len(v0_hist)<2:
# This is the first run. Use the analytical expression of
# dx(tstar)/dv0 of the refernece trajectory
dx = xstar - x_drag_at_tstar_est
dv0 = dx/(tstar_est*np.cos(theta))
v0_est += dv0
else:
# use linear interpolation
v0_est = v0_hist[-2] \
+ (v0_hist[-1]-v0_hist[-2]) \
*(xstar -x_drag_at_tstar_hist[-2])\
/(x_drag_at_tstar_hist[-1]-x_drag_at_tstar_hist[-2])
return (v0_est, tstar_est, flg_success, jtry_end)
# make a reference table of t* and x(t*) of a trajectory without air drag
# as a function of v0 and theta.
tstar_ref=np.empty((ngtheta,ngv0))
xdrop_ref=np.empty((ngtheta,ngv0))
for j1 in range(ngtheta):
for j2 in range(ngv0):
tt, xx = calc_tstar_ref_and_x_ref_at_tstar_ref(v0_grid[j2], theta_grid[j1], ystar, g_earth)
tstar_ref[j1,j2] = tt
xdrop_ref[j1,j2] = xx
# make an estimate of v0 and t* of a dragged trajectory for each theta
# based on the reference trajectory's landing position xdrop_ref.
tstar_est=np.empty((ngtheta,))
v0_est=np.empty((ngtheta,))
v0_est[:]=-1.0
# .. null value
for j1 in range(ngtheta):
for j2 in range(ngv0-1):
if (xdrop_ref[j1,j2+1]-xstar)*(xdrop_ref[j1,j2]-xstar)<=0.0:
tstar_est[j1] = tstar_ref[j1,j2]
# .. lazy
v0_est[j1] \
= v0_grid[j2] \
+ (v0_grid[j2+1]-v0_grid[j2])\
*(xstar-xdrop_ref[j1,j2])/(xdrop_ref[j1,j2+1]-xdrop_ref[j1,j2])
# .. linear interpolation
break
print('compute v0 for each theta under air drag..')
# compute v0 for each theta under air drag
theta_sol_list=[]
tstar_sol_list=[]
v0_sol_list=[]
outf.write('# theta v0 tstar numiter_v0\n')
for j1 in range(ngtheta):
if v0_est[j1]>0.0:
v0, tstar, flg_success, jtry_end \
= find_v0(xstar, v0_est[j1], theta_grid[j1], ystar, g_earth, lam, tstar_est[j1])
if flg_success:
theta_sol_list.append(theta_grid[j1])
v0_sol_list.append(v0)
tstar_sol_list.append(tstar)
outf.write('%26.16e %26.16e %26.16e %10i\n'
%(theta_grid[j1], v0, tstar, jtry_end+1))
theta_sol = np.array(theta_sol_list)
v0_sol = np.array(v0_sol_list)
tstar_sol = np.array(tstar_sol_list)
### Check a sample
jsample=np.size(v0_sol)//3
theta_sol_sample= theta_sol[jsample]
v0_sol_sample = v0_sol[jsample]
tstar_sol_sample= tstar_sol[jsample]
ngt_chk = 50
tgrid = np.linspace(0.0, tstar_sol_sample, ngt_chk)
yvec_list = scipy.integrate.odeint(rhs_drag,
[0.0, ystar,
v0_sol_sample*np.cos(theta_sol_sample),
v0_sol_sample*np.sin(theta_sol_sample)],
tgrid, args=(g_earth, lam))
y_drag_sol_sample = [yvec[1] for yvec in yvec_list]
x_drag_sol_sample = [yvec[0] for yvec in yvec_list]
# compute also the trajectory without drag starting from the same initial
# condition by setting lambda=0.
yvec_list = scipy.integrate.odeint(rhs_drag,
[0.0, ystar,
v0_sol_sample*np.cos(theta_sol_sample),
v0_sol_sample*np.sin(theta_sol_sample)],
tgrid, args=(g_earth, 0.0))
y_ref_sample = [yvec[1] for yvec in yvec_list]
x_ref_sample = [yvec[0] for yvec in yvec_list]
#######################################################################
# canvas setting
#######################################################################
f_size = (8,5)
#
a1_left = 0.15
a1_bottom = 0.15
a1_width = 0.65
a1_height = 0.80
#
hspace=0.02
#
ac_left = a1_left+a1_width+hspace
ac_bottom = a1_bottom
ac_width = 0.03
ac_height = a1_height
###########################################
############################################
# plot
############################################
#------------------------------------------------
print('plotting the solution..')
fig1=plt.figure(figsize=f_size)
ax1 =plt.axes([a1_left, a1_bottom, a1_width, a1_height], axisbg='w')
im1=img.NonUniformImage(ax1,
interpolation='bilinear', \
cmap=mpl.cm.Blues, \
norm=mpl.colors.Normalize(vmin=0.0,
vmax=np.max(xdrop_ref), clip=True))
im1.set_data(v0_grid, theta_grid/np.pi, xdrop_ref )
ax1.images.append(im1)
plt.contour(v0_grid, theta_grid/np.pi, xdrop_ref, [xstar], colors='y')
plt.plot(v0_sol, theta_sol/np.pi, 'ok', lw=4, label='Init Cond with Drag')
plt.legend(loc='lower left')
plt.xlabel(r'Initial Velocity $v_0$', fontsize=18)
plt.ylabel(r'Angle of Projection $\theta/\pi$', fontsize=18)
plt.yticks([-0.50, -0.25, 0.0, 0.25, 0.50])
ax1.set_xlim([v0min, v0max])
ax1.set_ylim([theta_min/np.pi, theta_max/np.pi])
axc =plt.axes([ac_left, ac_bottom, ac_width, ac_height], axisbg='w')
mpl.colorbar.Colorbar(axc,im1)
axc.set_ylabel('Distance from Balcony without Drag')
# 'Distance from Blacony $x(t^*)$'
plt.savefig('fig_xdrop_v0_theta.png')
print('figure file generated: fig_xdrop_v0_theta.png')
plt.close()
#------------------------------------------------
print('plotting a sample trajectory..')
fig1=plt.figure(figsize=f_size)
ax1 =plt.axes([a1_left, a1_bottom, a1_width, a1_height], axisbg='w')
plt.plot(x_drag_sol_sample, y_drag_sol_sample, '-b', lw=2, label='with drag')
plt.plot(x_ref_sample, y_ref_sample, '--k', lw=2, label='without drag')
plt.axvline(x=xstar, color=[0.3, 0.3, 0.3], lw=1.0)
plt.axhline(y=0.0, color=[0.3, 0.3, 0.3], lw=1.0)
plt.legend()
plt.text(0.1*xstar, 0.6*ystar,
r'$v_0=%5.2f$'%(v0_sol_sample)+'\n'+r'$\theta=%5.2f \pi$'%(theta_sol_sample/np.pi),
fontsize=18)
plt.text(xstar, 0.5*ystar, 'xstar', fontsize=18)
plt.xlabel(r'Horizontal Distance $x$', fontsize=18)
plt.ylabel(r'Height $y$', fontsize=18)
ax1.set_xlim([0.0, 1.5*xstar])
ax1.set_ylim([-0.1*ystar, 1.5*ystar])
plt.savefig('fig_traj_sample.png')
print('figure file generated: fig_traj_sample.png')
plt.close()
outf.close()
Here is the figure fig_xdrop_v0_theta.png.
Here is the figure fig_traj_sample.png.
Edit 2018-07-15
I realized that I overlooked that the question considers air drag. Shame on me. So, my answer below is not correct. I'm afraid that deleting my answer myself would look like hiding a mistake, so I leave it below for now. If people think it's annoying that an incorrect answer is hanging around, I'm OK with someone deleting it.
The differential equation (without drag) can actually be solved by hand, and it does not require much computational effort to map out how far from the balcony the person reaches the ground as a function of the initial velocity v0 and the angle theta. Then you can select the conditions (v0, theta) such that distance_from_balcony_on_the_ground(v0, theta) = xstar from this data table.
Let's write the horizontal and vertical coordinates of the person at time t as x(t) and y(t), respectively. I think you took x = 0 at the wall of the building and y = 0 as the ground level, and I do so here, too. Let's say the horizontal and vertical velocities of the person at time t are v_x(t) and v_y(t), respectively.
The initial conditions at t=0 are given as
x(0) = 0
y(0) = ystar
v_x(0) = v0 cos theta
v_y(0) = v0 sin theta
The Newton equations you are solving are,
dx/dt = v_x .. (1)
dy/dt = v_y .. (2)
m d v_x /dt = 0 .. (3)
m d v_y /dt = -m g .. (4)
where m is the mass of the person and g is the gravitational acceleration (the constant whose English name I didn't know, but we all know what it is).
From eq. (3),
v_x(t) = v_x(0) = v0 cos theta.
Using this with eq. (1),
x(t) = x(0) + \int_0^t dt' v_x(t') = t v0 cos theta,
where we also used the initial condition. \int_0^t means
integral from 0 to t.
From eq. (4),
v_y(t)
= v_y (0) + \int_0^t dt' (-g)
= v0 sin theta -g t,
where we used the initial condition.
Using this with eq. (2) and also using the initial condition,
y(t)
= y(0) + \int_0^t dt' v_y(t')
= ystar + t v0 sin theta -t^2 (g/2).
where t^2 means t squared.
From the expression for y(t), we can get the time tstar
at which the person hits the ground. That is, y(tstar) =0.
This equation can be solved by the quadratic formula as
tstar = (v0 sin theta + sqrt((v0 sin theta)^2 + 2 g ystar)) / g,
where I used a condition tstar>0. Now we know
the distance from the balcony the person reached when he hit
the ground as x(tstar). Using the expression for x(t) above,
x(tstar) = (v0 cos theta) (v0 sin theta + sqrt((v0 sin theta)^2 + 2g ystar))/g.
.. (5)
Actually x(tstar) depends on v0 and theta as well as on g and ystar. You hold g and ystar constant, and you want to find all (v0, theta) such that x(tstar) = xstar for a given xstar value. Since the right-hand side of eq. (5) can be computed cheaply, you can set up grids for v0 and theta and compute x(tstar) on this 2D grid. Then you can see roughly where the solution set of (v0, theta) lies. If you need a precise solution, you can pick a segment which encloses the solution from this data table.
Below is a python script that demonstrates this idea.
I also attach here a figure generated by this script. The yellow curve is the solution set (v0, theta) such that the person hits the ground at xstar from the wall when xstar = 8.0*1.95 and ystar = 4.0*1.95, as you set. The blue color scale indicates x(tstar), i.e., how far the person jumped from the balcony horizontally.
Note that for a given v0 (above a threshold value of around v0 = 9.9) there are two theta values (two directions for the person to project himself) that reach the aimed point (x, y) = (xstar, 0). The smaller branch of the theta values can be negative, meaning that the person can jump downward to reach the aimed point, as long as the initial velocity is sufficiently high.
The script also generates a data file output.dat, which has
the solution-enclosing segments.
#!/usr/bin/python3
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as img
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['lines.markeredgewidth'] = 1.0
mpl.rcParams['axes.formatter.limits'] = (-4,4)
#mpl.rcParams['axes.formatter.limits'] = (-2,2)
mpl.rcParams['axes.labelsize'] = 'large'
mpl.rcParams['xtick.labelsize'] = 'large'
mpl.rcParams['ytick.labelsize'] = 'large'
mpl.rcParams['xtick.direction'] = 'out'
mpl.rcParams['ytick.direction'] = 'out'
############################################
len_ref = 1.95
xstar = 8.0*len_ref
ystar = 4.0*len_ref
g_earth = 9.81
############################################
ngv0=100
v0min =0.0
v0max =20.0
v0_grid=np.linspace(v0min, v0max, ngv0)
############################################
outf=open('output.dat','w')
print('data file generated: output.dat')
###########################################
def x_at_tstar(v0, theta, ystar, g_earth):
vx = v0*np.cos(theta)
vy = v0*np.sin(theta)
return (vy+np.sqrt(vy**2+2.0*g_earth*ystar))*vx/g_earth
ngtheta=100
theta_min = -0.5*np.pi
theta_max = 0.5*np.pi
theta_grid = np.linspace(theta_min, theta_max, ngtheta)
xdrop=np.empty((ngv0,ngtheta))
# x(t*) as a function of v0 and theta.
for j1 in range(ngv0):
for j2 in range(ngtheta):
xdrop[j1,j2] = x_at_tstar(v0_grid[j1], theta_grid[j2], ystar, g_earth)
outf.write('# domain [theta_lower, theta_upper] that encloses the solution\n')
outf.write('# theta such that x_at_tstar(v0, theta, ystar, g_earth) = xstar\n')
outf.write('# v0 theta_lower theta_upper x_lower x_upper\n')
for j1 in range(ngv0):
for j2 in range(ngtheta-1):
if (xdrop[j1,j2+1]-xstar)*(xdrop[j1,j2]-xstar)<=0.0:
outf.write('%26.16e %26.16e %26.16e %26.16e %26.16e\n'
%(v0_grid[j1], theta_grid[j2], theta_grid[j2+1],
xdrop[j1,j2], xdrop[j1,j2+1]))
print('See output.dat for the segments enclosing solutions.')
print('You can hunt further for precise solutions using this data.')
#######################################################################
# canvas setting
#######################################################################
f_size = (8,5)
#
a1_left = 0.15
a1_bottom = 0.15
a1_width = 0.65
a1_height = 0.80
#
hspace=0.02
#
ac_left = a1_left+a1_width+hspace
ac_bottom = a1_bottom
ac_width = 0.03
ac_height = a1_height
###########################################
############################################
# plot
############################################
print('plotting..')
fig1=plt.figure(figsize=f_size)
ax1 =plt.axes([a1_left, a1_bottom, a1_width, a1_height], axisbg='w')
im1=img.NonUniformImage(ax1,
interpolation='bilinear', \
cmap=mpl.cm.Blues, \
norm=mpl.colors.Normalize(vmin=0.0,
vmax=np.max(xdrop), clip=True))
im1.set_data(v0_grid, theta_grid/np.pi, np.transpose(xdrop))
ax1.images.append(im1)
plt.contour(v0_grid, theta_grid/np.pi, np.transpose(xdrop), [xstar], colors='y')
plt.xlabel(r'Initial Velocity $v_0$', fontsize=18)
plt.ylabel(r'Angle of Projection $\theta/\pi$', fontsize=18)
plt.yticks([-0.50, -0.25, 0.0, 0.25, 0.50])
ax1.set_xlim([v0min, v0max])
ax1.set_ylim([theta_min/np.pi, theta_max/np.pi])
axc =plt.axes([ac_left, ac_bottom, ac_width, ac_height], axisbg='w')
mpl.colorbar.Colorbar(axc,im1)
# 'Distance from Blacony $x(t^*)$'
plt.savefig('fig_xdrop_v0_theta.png')
print('figure file generated: fig_xdrop_v0_theta.png')
plt.close()
outf.close()
So after some trying out I found a way to achieve what I wanted... It is the brute force method that I mentioned in my starting post, but at least now it works...
The idea is quite simple: define a function find_v0 which, for a given theta, finds a v0. In this function you take a starting value for v0 (I chose 8, but this was just a guess), then check with the difference function how far away the interesting point is from (xstar, 0). The interesting point can be determined by setting all points on the x-axis that are bigger than xstar to zero (together with their corresponding y-values) and then trimming off all the zeros with trim_zeros; the last remaining element then corresponds to the desired output. If the output of the difference function is smaller than a critical value (in my case 0.1), return the current v0; if not, increase it by 0.01 and repeat.
The code for this looks like this (again replacing 3) and 4) ):
th = np.linspace(0,np.pi/3,100)
def find_v0(theta):
v0=8
while(True):
v0x = v0 * np.cos(theta)
v0y = v0 * np.sin(theta)
z0 = np.array([0, v0x, ystar, v0y])
# Calculate solution
t, z = explicit_midpoint(rhs, z0, 5, 1000)
for k in range(1001):
if z[k,0] > xstar:
z[k,0] = 0
z[k,2] = 0
x = np.trim_zeros(z[:,0])
y = np.trim_zeros(z[:,2])
diff = difference(x[-1],y[-1])
if diff < 0.1:
break
else: v0+=0.01
return v0#,x,y[0:]
v0 = np.zeros_like(th)
from tqdm import tqdm
count=0
for k in tqdm(th):
v0[count] = find_v0(k)
count+=1
from scipy import interpolate
v0_interp = interpolate.interp1d(th, v0)
plt.figure()
plt.plot(th,v0_interp(th),"g")
plt.grid(True)
plt.xlabel(r"$\theta$")
plt.ylabel(r"$v_0$")
plt.show()
The problem with this approach is that it takes forever to compute (with the current settings, around 5-6 minutes). If anyone has hints on how to make the code a bit faster, or has a different approach, it would still be appreciated.
Assuming that the velocity in the x direction never goes down to zero, you can take x as the independent parameter instead of time. The state vector is then (time, position, velocity), and the vector field in this state space is scaled so that the vx component is always 1. Then integrate from zero to xstar to compute (an approximation of) the state where the trajectory reaches xstar as its x-value.
from scipy.integrate import odeint

def derivs(u, x):
    # state is [t, x, y, vx, vy]; lam (= cw*A*rho/(2*m)) and g as in the question
    t, x, y, vx, vy = u
    v = np.hypot(vx, vy)
    ax = -lam*v*vx
    ay = -lam*v*vy - g
    return [1/vx, 1, vy/vx, ax/vx, ay/vx]

odeint(derivs, [0, x0, y0, vx0, vy0], [0, xstar])
Or use your own integration method; I used odeint as a documented interface to show how this derivatives function is used in the integration.
The resulting time and y-value can be extreme
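Building on the derivs function above, here is a minimal sketch (my addition, not part of the original answer) of how one could then solve for v0 at a fixed theta with a scalar root finder; the bracket values are guesses, and the sketch assumes vx stays positive over the whole bracket:
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import brentq

def y_at_xstar(v0, theta):
    # state is [t, x, y, vx, vy]; integrate in x from 0 to xstar
    u0 = [0.0, 0.0, ystar, v0*np.cos(theta), v0*np.sin(theta)]
    u = odeint(derivs, u0, [0.0, xstar])
    return u[-1, 2]   # y when x == xstar

theta = np.pi/6
v0_solution = brentq(lambda v0: y_at_xstar(v0, theta), 5.0, 50.0)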
I have an application that requires a disk populated with 'n' points in a quasi-random fashion. I want the points to be somewhat random, but still have a more or less regular density over the disk.
My current method is to place a point, check if it's inside the disk, and then check if it is also far enough away from all other points already kept. My code is below:
import os
import random
import math
# ------------------------------------------------ #
# geometric constants
center_x = -1188.2
center_y = -576.9
center_z = -3638.3
disk_distance = 2.0*5465.6
disk_diam = 5465.6
# ------------------------------------------------ #
pts_per_disk = 256
closeness_criteria = 200.0
min_closeness_criteria = disk_diam/closeness_criteria
disk_center = [(center_x-disk_distance),center_y,center_z]
pts_in_disk = []
while len(pts_in_disk) < (pts_per_disk):
potential_pt_x = disk_center[0]
potential_pt_dy = random.uniform(-disk_diam/2.0, disk_diam/2.0)
potential_pt_y = disk_center[1]+potential_pt_dy
potential_pt_dz = random.uniform(-disk_diam/2.0, disk_diam/2.0)
potential_pt_z = disk_center[2]+potential_pt_dz
potential_pt_rad = math.sqrt((potential_pt_dy)**2+(potential_pt_dz)**2)
if potential_pt_rad < (disk_diam/2.0):
far_enough_away = True
for pt in pts_in_disk:
if math.sqrt((potential_pt_x - pt[0])**2+(potential_pt_y - pt[1])**2+(potential_pt_z - pt[2])**2) > min_closeness_criteria:
pass
else:
far_enough_away = False
break
if far_enough_away:
pts_in_disk.append([potential_pt_x,potential_pt_y,potential_pt_z])
outfile_name = "pt_locs_x_lo_"+str(pts_per_disk)+"_pts.txt"
outfile = open(outfile_name,'w')
for pt in pts_in_disk:
outfile.write(" ".join([("%.5f" % (pt[0]/1000.0)),("%.5f" % (pt[1]/1000.0)),("%.5f" % (pt[2]/1000.0))])+'\n')
outfile.close()
In order to get the most even point density, what I do is basically run this script iteratively from another script, with the 'closeness' criterion reduced on each successive iteration. At some point the script can no longer finish, and I just use the points of the last successful iteration.
So my question is rather broad: is there a better way to do this? My method is OK for now, but my gut says that there is a better way to generate such a field of points.
An illustration of the output is graphed below, one with a high closeness criterion, and another with the 'lowest found' closeness criterion (what I want).
A simple solution based on Disk Point Picking from MathWorld:
import numpy as np
import matplotlib.pyplot as plt
n = 1000
r = np.random.uniform(low=0, high=1, size=n) # radius
theta = np.random.uniform(low=0, high=2*np.pi, size=n) # angle
x = np.sqrt(r) * np.cos(theta)
y = np.sqrt(r) * np.sin(theta)
# for plotting circle line:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'-', alpha=.5) # draw unit circle line
ax.plot(x, y, '.') # plot random points
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives.
Alternatively, you also could create a regular grid and distort it randomly:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
n = 20
tt = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(tt, tt) # create unit square grid
s_x, s_y = xx.ravel(), yy.ravel()
ii = np.argwhere(s_x**2 + s_y**2 <= 1).ravel() # mask off unwanted points
x, y = s_x[ii], s_y[ii]
triang = tri.Triangulation(x, y) # create triangluar grid
# distort the grid
g = .5 # distortion factor
rx = x + np.random.uniform(low=-g/n, high=g/n, size=x.shape)
ry = y + np.random.uniform(low=-g/n, high=g/n, size=y.shape)
rtri = tri.Triangulation(rx, ry, triang.triangles) # distorted grid
# for circle:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'k-', alpha=.2) # circle line
ax.triplot(triang, "g-", alpha=.4)
ax.triplot(rtri, 'b-', alpha=.5)
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives
The triangles are just there for visualization. The obvious disadvantage is that depending on your choice of grid, either in the middle or on the borders (as shown here), there will be more or less large "holes" due to the grid discretization.
If you have a defined area like a disc (circle) that you wish to generate random points within you are better off using an equation for a circle and limiting on the radius:
x^2 + y^2 = r^2 (0 < r < R)
or parametrized to two variables
cos(a) = x/r
sin(a) = y/r
sin^2(a) + cos^2(a) = 1
To generate something like the pseudo-random distribution with low density you should take the following approach:
For randomly distributed ranges of r and a, choose n points.
This allows you to generate your distribution to roughly meet your density criteria.
To understand why this works, imagine your circle first divided into small rings of width dr, then imagine your circle divided into pie slices of angle da. Your randomness now has equal probability over the whole boxed area around the circle. If you divide the areas of allowed randomness throughout your circle, you will get a more even distribution around the overall circle and small random variation within the individual areas, giving you the pseudo-random look and feel you are after.
Now your job is just to generate n points for each given area. You will want n to depend on r, as the area of each division changes as you move outward in the circle. You can proportion this to the exact change in area each division brings:
for the n-th to n+1-th ring:
d(Area,n,n-1) = Area(n) - Area(n-1)
The area of any given ring is:
Area = pi*(dr*n)^2 - pi*(dr*(n-1))^2
So the difference becomes:
d(Area,n,n-1) = [pi*(dr*n)^2 - pi*(dr*(n-1))^2] - [pi*(dr*(n-1))^2 - pi*(dr*(n-2))^2]
d(Area,n,n-1) = pi*[(dr*n)^2 - 2*(dr*(n-1))^2 + (dr*(n-2))^2]
You could expound this to gain some insight on how much n should increase but it may be faster to just guess at some percentage increase (30%) or something.
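(Expounding it, as a check of my own: the ring areas simplify to Area(n) = pi*dr^2*(2n-1), so the area per ring grows linearly with n, and the increment d(Area,n,n-1) is the constant 2*pi*dr^2. This matches the density-scaled snippet further below, which uses pi*(dr**2)*(2*ring+1) with rings indexed from 0.)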
The example I have provided is a small subset and decreasing da and dr will dramatically improve your results.
Here is some rough code for generating such points:
import random
import math
R = 10.
n_rings = 10.
n_angles = 10.
dr = R/n_rings   # use R instead of a hard-coded 10
da = 2*math.pi/n_angles
base_points_per_division = 3
increase_per_level = 1.1
points = []
ring = 0
while ring < n_rings:
angle = 0
while angle < n_angles:
for i in xrange(int(base_points_per_division)):
            ra = angle*da + da*random.random()   # random.random(), not math.random()
            rr = ring*dr + dr*random.random()    # ring*dr, not the undefined r*dr
x = rr*math.cos(ra)
y = rr*math.sin(ra)
points.append((x,y))
angle += 1
base_points_per_division = base_points_per_division*increase_per_level
ring += 1
I tested it with the parameters:
n_rings = 20
n_angles = 20
base_points = .9
increase_per_level = 1.1
And got the following results:
It looks more dense than your provided image, but I imagine further tweaking of those variables could be beneficial.
You can add an additional part to scale the density properly by calculating the number of points per ring.
points_per_ring = density*math.pi*(dr**2)*(2*n+1)
points_per_division = points_per_ring/n_angles
This will provide an even better scaled distribution.
density = .03
points = []
ring = 0
while ring < n_rings:
angle = 0
base_points_per_division = density*math.pi*(dr**2)*(2*ring+1)/n_angles
while angle < n_angles:
for i in xrange(int(base_points_per_division)):
ra = angle*da + min(da,da*random.random())
rr = ring*dr + dr*random.random()
x = rr*math.cos(ra)
y = rr*math.sin(ra)
points.append((x,y))
angle += 1
ring += 1
Giving better results using the following parameters
R = 1.
n_rings = 10.
n_angles = 10.
density = 10/(dr*da) # ~ ten points per unit area
With a graph...
And for fun, you can graph the divisions to see how well the result matches your distribution, and adjust.
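A minimal plotting sketch (my addition) for the points list generated above:
import matplotlib.pyplot as plt
xs, ys = zip(*points)
plt.scatter(xs, ys, s=4)
plt.gca().set_aspect('equal')
plt.show()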
Depending on how random the points need to be, it may be simple enough to just make a grid of points within the disk, and then displace each point by some small but random amount.
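A minimal sketch of that idea (my own illustration, not necessarily how the answerer would write it):
import numpy as np
h = 0.1                                        # grid spacing
xx, yy = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h))
x, y = xx.ravel(), yy.ravel()
x = x + np.random.uniform(-h/3, h/3, x.shape)  # small random displacement
y = y + np.random.uniform(-h/3, h/3, y.shape)
keep = x**2 + y**2 <= 1.0                      # keep only points inside the unit disk
x, y = x[keep], y[keep]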
It may be that you want more randomness, but if you just want to fill your disc with an even-looking distribution of points that aren't on an obvious grid, you could try a spiral with a random phase.
import math
import random
import pylab
n = 300
alpha = math.pi * (3 - math.sqrt(5)) # the "golden angle"
phase = random.random() * 2 * math.pi
points = []
for k in xrange(n):
theta = k * alpha + phase
r = math.sqrt(float(k)/n)
points.append((r * math.cos(theta), r * math.sin(theta)))
pylab.scatter(*zip(*points))
pylab.show()
Probability theory ensures that the rejection method is an appropriate way to generate uniformly distributed points within the disk D(0,r), centered at the origin and of radius r. Namely, one generates points within the square [-r,r] x [-r,r] until a point falls within the disk:
do {
    generate P in [-r,r]x[-r,r];
} while (P[0]**2 + P[1]**2 > r**2);
return P;
unif_rnd_disk is a generator function implementing this rejection method:
import matplotlib.pyplot as plt
import numpy as np
import itertools
def unif_rnd_disk(r=1.0):
pt=np.zeros(2)
while True:
yield pt
while True:
pt=-r+2*r*np.random.random(2)
            if (pt[0]**2 + pt[1]**2 <= r**2):
break
G=unif_rnd_disk()# generator of points in disk D(0,r=1)
X,Y=zip(*[pt for pt in itertools.islice(G, 1, 1000)])
plt.scatter(X, Y, color='r', s=3)
plt.axis('equal')
If we want to generate points in a disk centered at C(a,b), we have to apply a translation to the points in the disk D(0,r):
C=[2.0, -3.5]
plt.scatter(C[0]+np.array(X), C[1]+np.array(Y), color='r', s=3)
plt.axis('equal')
I am trying to use circle-fitting code for a 3D data set. I have modified it for 3D points, just adding the z-coordinate where necessary. My modification works fine for one set of points and badly for another. Please look at the code to see if it has some errors.
import trig_items
import numpy as np
from trig_items import *
from numpy import *
from matplotlib import pyplot as p
from scipy import optimize
# Coordinates of the 3D points
##x = r_[36, 36, 19, 18, 33, 26]
##y = r_[14, 10, 28, 31, 18, 26]
##z = r_[0, 1, 2, 3, 4, 5]
x = r_[ 2144.18908574, 2144.26880854, 2144.05552972, 2143.90303742, 2143.62520676,
2143.43628579, 2143.14005775, 2142.79919654, 2142.51436023, 2142.11240866,
2141.68564346, 2141.29333828, 2140.92596405, 2140.3475612, 2139.90848046,
2139.24661021, 2138.67384709, 2138.03313547, 2137.40301734, 2137.40908256,
2137.06611224, 2136.50943781, 2136.0553113, 2135.50313189, 2135.07049922,
2134.62098139, 2134.10459535, 2133.50838433, 2130.6600465, 2130.03537342,
2130.04047644, 2128.83522468, 2127.79827542, 2126.43513385, 2125.36700593,
2124.00350543, 2122.68564431, 2121.20709478, 2119.79047011, 2118.38417647,
2116.90063343, 2115.52685778, 2113.82246629, 2112.21159431, 2110.63180117,
2109.00713198, 2108.94434529, 2106.82777156, 2100.62343757, 2098.5090226,
2096.28787738, 2093.91550703, 2091.66075061, 2089.15316429, 2086.69753869,
2084.3002414, 2081.87590579, 2079.19141866, 2076.5394574, 2073.89128676,
2071.18786213]
y = r_[ 725.74913818, 724.43874065, 723.15226506, 720.45950581, 717.77827954,
715.07048092, 712.39633862, 709.73267688, 707.06039438, 704.43405908,
701.80074596, 699.15371526, 696.5309022, 693.96109921, 691.35585501,
688.83496327, 686.32148661, 683.80286662, 681.30705568, 681.30530975,
679.66483676, 678.01922321, 676.32721779, 674.6667554, 672.9658024,
671.23686095, 669.52021535, 667.84999077, 659.19757984, 657.46179949,
657.45700508, 654.46901086, 651.38177517, 648.41739432, 645.32356976,
642.39034578, 639.42628453, 636.51107198, 633.57732055, 630.63825133,
627.75308356, 624.80162215, 622.01980232, 619.18814892, 616.37688894,
613.57400131, 613.61535723, 610.4724493, 600.98277781, 597.84782844,
594.75983001, 591.77946964, 588.74874068, 585.84525834, 582.92311166,
579.99564481, 577.06666417, 574.30782762, 571.54115037, 568.79760614,
566.08551098]
z = r_[ 339.77146775, 339.60021095, 339.47645894, 339.47130963, 339.37216218,
339.4126132, 339.67942046, 339.40917728, 339.39500353, 339.15041461,
339.38959195, 339.3358209, 339.47764895, 339.17854867, 339.14624071,
339.16403926, 339.02308811, 339.27011082, 338.97684183, 338.95087698,
338.97321177, 339.02175448, 339.02543922, 338.88725411, 339.06942374,
339.0557553, 339.04414618, 338.89234303, 338.95572249, 339.00880416,
339.00413073, 338.91080374, 338.98214758, 339.01135789, 338.96393537,
338.73446188, 338.62784913, 338.72443217, 338.74880562, 338.69090173,
338.50765186, 338.49056867, 338.57353355, 338.6196255, 338.43754399,
338.27218569, 338.10587265, 338.43880881, 338.28962141, 338.14338705,
338.25784154, 338.49792568, 338.15572139, 338.52967693, 338.4594245,
338.1511823, 338.03711207, 338.19144663, 338.22022045, 338.29032321,
337.8623197 ]
# coordinates of the barycenter
xm = mean(x)
ym = mean(y)
zm = mean(z)
### Basic usage of optimize.leastsq
def calc_R(xc, yc, zc):
""" calculate the distance of each 3D points from the center (xc, yc, zc) """
return sqrt((x - xc) ** 2 + (y - yc) ** 2 + (z - zc) ** 2)
def func(c):
""" calculate the algebraic distance between the 3D points and the mean circle centered at c=(xc, yc, zc) """
Ri = calc_R(*c)
return Ri - Ri.mean()
center_estimate = xm, ym, zm
center, ier = optimize.leastsq(func, center_estimate)
##print center
xc, yc, zc = center
Ri = calc_R(xc, yc, zc)
R = Ri.mean()
residu = sum((Ri - R)**2)
print 'R =', R
So, for the first set of x, y, z (commented out in the code) it works well: the output is R = 39.0097846735. If I run the code with the second set of points (uncommented), the resulting radius is R = 108576.859834, which corresponds to an almost straight line. I plotted the last one.
The blue points are the given data set, the red ones are the arc with the resulting radius R = 108576.859834. It is obvious that the given data set has a much smaller radius than the result.
Here is another set of points.
It is clear that the least-squares fit does not work correctly here.
Please help me solve this issue.
UPDATE
Here is my solution:
### fit 3D arc into a set of 3D points ###
### output is the centre and the radius of the arc ###
def fitArc3d(arr, eps = 0.0001):
# Coordinates of the 3D points
x = numpy.array([arr[k][0] for k in range(len(arr))])
y = numpy.array([arr[k][4] for k in range(len(arr))])
z = numpy.array([arr[k][5] for k in range(len(arr))])
# coordinates of the barycenter
xm = mean(x)
ym = mean(y)
zm = mean(z)
### gradient descent minimisation method ###
pnts = [[x[k], y[k], z[k]] for k in range(len(x))]
meanP = Point(xm, ym, zm) # mean point
Ri = [Point(*meanP).distance(Point(*pnts[k])) for k in range(len(pnts))] # radii to the points
Rm = math.fsum(Ri) / len(Ri) # mean radius
dR = Rm + 10 # difference between mean radii
alpha = 0.1
c = meanP
cArr = []
while dR > eps:
cArr.append(c)
Jx = math.fsum([2 * (x[k] - c[0]) * (Ri[k] - Rm) / Ri[k] for k in range(len(Ri))])
Jy = math.fsum([2 * (y[k] - c[1]) * (Ri[k] - Rm) / Ri[k] for k in range(len(Ri))])
Jz = math.fsum([2 * (z[k] - c[2]) * (Ri[k] - Rm) / Ri[k] for k in range(len(Ri))])
gradJ = [Jx, Jy, Jz] # find gradient
c = [c[k] + alpha * gradJ[k] for k in range(len(c)) if len(c) == len(gradJ)] # find new centre point
Ri = [Point(*c).distance(Point(*pnts[k])) for k in range(len(pnts))] # calculate new radii
RmOld = Rm
Rm = math.fsum(Ri) / len(Ri) # calculate new mean radius
dR = abs(Rm - RmOld) # new difference between mean radii
return Point(*c), Rm
It is not very optimal code (I did not have time to fine-tune it), but it works.
I guess the problem is the data and the corresponding algorithm. The least-squares method works fine if it produces a local parabolic minimum, such that a simple gradient method moves approximately in the direction of the minimum. Unfortunately, this is not necessarily the case for your data. You can check this by keeping some rough estimates for xc and yc fixed and plotting the sum of the squared residuals as a function of zc and R. I get a boomerang-shaped minimum. Depending on your starting parameters you might end up in one of the branches going away from the real minimum. Once in the valley, this can be very flat, such that you exceed the maximum number of iterations or get something that is accepted within the tolerance of the algorithm. As always, things get better with better starting parameters. Unfortunately you have only a small arc of the circle, so it is difficult to do better. I am not a specialist in Python, but I think that leastsq lets you play with the Jacobian and gradient methods. Try to play with the tolerance as well.
In short: the code looks basically fine to me, but your data is pathological and you have to adapt the code to that kind of data.
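A minimal sketch (my addition) of the diagnostic suggested above: fix rough values for xc and yc and plot the sum of squared residuals over a (zc, R) grid (the fixed values and grid ranges below are only illustrative):
import numpy as np
import matplotlib.pyplot as plt

xc_fix, yc_fix = 2000.0, 700.0             # rough guesses for xc, yc
zc_grid = np.linspace(-500.0, 1000.0, 200)
R_grid = np.linspace(50.0, 1500.0, 200)
ZC, RR = np.meshgrid(zc_grid, R_grid)
ssr = np.zeros_like(ZC)
for xi, yi, zi in zip(x, y, z):             # x, y, z from the question's data
    Ri = np.sqrt((xi - xc_fix)**2 + (yi - yc_fix)**2 + (zi - ZC)**2)
    ssr += (Ri - RR)**2
plt.contourf(ZC, RR, np.log10(ssr), 30)
plt.xlabel('zc'); plt.ylabel('R')
plt.colorbar(label='log10(sum of squared residuals)')
plt.show()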
There is a non-iterative solution in 2D from Karimäki, maybe you can adapt
this method to 3D. You can also look at this. Sure you will find more literature.
I just checked the data using a Simplex-Algorithm. The minimum is, as I said, not well behaved. See here some cuts of the residual function. Only in the xy-plane you get some reasonable behavior. The properties of the zr- and xr- plane make the finding process very difficult.
So in the beginning the simplex algorithm finds several almost stable solutions. You can see them as flat steps in the graph below (blue x, purple y, yellow z, green R). At the end the algorithm has to walk down the almost flat but very stretched-out valley, resulting in the final convergence of z and R. Nevertheless, I expect many regions that look like a solution if the tolerance is insufficient. With the standard tolerance of 10^-5 the algorithm stopped after approximately 350 iterations. I had to set it to 10^-10 to get this solution, i.e. [1899.32, 741.874, 298.696, 248.956], which seems quite OK.
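For completeness, a hedged sketch of tightening the tolerances in the question's leastsq call (xtol, ftol and maxfev are leastsq keyword arguments; the values below are only illustrative):
center, ier = optimize.leastsq(func, center_estimate, xtol=1e-10, ftol=1e-10, maxfev=10000)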
Update
As mentioned earlier, the solution depends on the working precision and the requested accuracy. So your hand-made gradient method probably works better, as these values differ from those of the built-in least-squares fit. Nevertheless, here is my version, which does a two-step fit. First I fit a plane to the data. In the next step I fit a circle within this plane. Both steps use the least-squares method. This time it works, as each step avoids critically shaped minima. (Naturally, the plane fit runs into problems if the arc segment becomes small and the data lies virtually on a straight line. But this will happen for all algorithms.)
from math import *
from matplotlib import pyplot as plt
from scipy import optimize
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import pprint as pp
dataTupel=zip(xs,ys,zs) #your data from above
# Fitting a plane first
# let the affine plane be defined by two vectors,
# the zero point P0 and the plane normal n0
# a point p is member of the plane if (p-p0).n0 = 0
def distanceToPlane(p0,n0,p):
return np.dot(np.array(n0),np.array(p)-np.array(p0))
def residualsPlane(parameters,dataPoint):
px,py,pz,theta,phi = parameters
nx,ny,nz =sin(theta)*cos(phi),sin(theta)*sin(phi),cos(theta)
distances = [distanceToPlane([px,py,pz],[nx,ny,nz],[x,y,z]) for x,y,z in dataPoint]
return distances
estimate = [1900, 700, 335, 0, 0] # px, py, pz and theta, phi
#you may automize this by using the center of mass data
# note that the normal vector is given in polar coordinates
bestFitValues, ier = optimize.leastsq(residualsPlane, estimate, args=(dataTupel))
xF,yF,zF,tF,pF = bestFitValues
point = [xF,yF,zF]
normal = [sin(tF)*cos(pF),sin(tF)*sin(pF),cos(tF)]
# Fitting a circle inside the plane
#creating two inplane vectors
sArr=np.cross(np.array([1,0,0]),np.array(normal))#assuming that normal not parallel x!
sArr=sArr/np.linalg.norm(sArr)
rArr=np.cross(sArr,np.array(normal))
rArr=rArr/np.linalg.norm(rArr)#should be normalized already, but anyhow
def residualsCircle(parameters,dataPoint):
r,s,Ri = parameters
planePointArr = s*sArr + r*rArr + np.array(point)
distance = [ np.linalg.norm( planePointArr-np.array([x,y,z])) for x,y,z in dataPoint]
res = [(Ri-dist) for dist in distance]
return res
estimateCircle = [0, 0, 335] # r, s and Ri
bestCircleFitValues, ier = optimize.leastsq(residualsCircle, estimateCircle,args=(dataTupel))
rF,sF,RiF = bestCircleFitValues
print bestCircleFitValues
# Synthetic Data
centerPointArr=sF*sArr + rF*rArr + np.array(point)
synthetic=[list(centerPointArr+ RiF*cos(phi)*rArr+RiF*sin(phi)*sArr) for phi in np.linspace(0, 2*pi,50)]
[cxTupel,cyTupel,czTupel]=[ x for x in zip(*synthetic)]
### Plotting
d = -np.dot(np.array(point),np.array(normal))# dot product
# create x,y mesh
xx, yy = np.meshgrid(np.linspace(2000,2200,10), np.linspace(540,740,10))
# calculate corresponding z
# Note: does not work if normal vector is without z-component
z = (-normal[0]*xx - normal[1]*yy - d)/normal[2]
# plot the surface, data, and synthetic circle
fig = plt.figure()
ax = fig.add_subplot(211, projection='3d')
ax.scatter(xs, ys, zs, c='b', marker='o')
ax.plot_wireframe(xx,yy,z)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
bx = fig.add_subplot(212, projection='3d')
bx.scatter(xs, ys, zs, c='b', marker='o')
bx.scatter(cxTupel,cyTupel,czTupel, c='r', marker='o')
bx.set_xlabel('X Label')
bx.set_ylabel('Y Label')
bx.set_zlabel('Z Label')
plt.show()
which gives a radius of 245. This is close to what the other approach gave (249). So within the error margins I get the same result.
The plotted result looks reasonable.
Hope this helps.
It feels like you missed some constraints in your first version of the code. The implementation amounts to fitting a sphere to the 3D points, which is why the radius for the second data set comes out so large that the arc is almost a straight line: the fit thinks you are giving it a small arc lying on a very large sphere.