Method to uniformly randomly populate a disk with points in python
I have an application that requires a disk populated with 'n' points in a quasi-random fashion. I want the points to be somewhat random, but still have a more or less regular density over the disk.
My current method is to place a point, check if it's inside the disk, and then check if it is also far enough away from all other points already kept. My code is below:
import os
import random
import math
# ------------------------------------------------ #
# geometric constants
center_x = -1188.2
center_y = -576.9
center_z = -3638.3
disk_distance = 2.0*5465.6
disk_diam = 5465.6
# ------------------------------------------------ #
pts_per_disk = 256
closeness_criteria = 200.0
min_closeness_criteria = disk_diam/closeness_criteria
disk_center = [(center_x-disk_distance),center_y,center_z]
pts_in_disk = []
while len(pts_in_disk) < pts_per_disk:
    potential_pt_x = disk_center[0]
    potential_pt_dy = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_y = disk_center[1] + potential_pt_dy
    potential_pt_dz = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_z = disk_center[2] + potential_pt_dz
    potential_pt_rad = math.sqrt(potential_pt_dy**2 + potential_pt_dz**2)
    if potential_pt_rad < (disk_diam/2.0):
        far_enough_away = True
        for pt in pts_in_disk:
            if math.sqrt((potential_pt_x - pt[0])**2 + (potential_pt_y - pt[1])**2 + (potential_pt_z - pt[2])**2) <= min_closeness_criteria:
                far_enough_away = False
                break
        if far_enough_away:
            pts_in_disk.append([potential_pt_x, potential_pt_y, potential_pt_z])

outfile_name = "pt_locs_x_lo_" + str(pts_per_disk) + "_pts.txt"
outfile = open(outfile_name, 'w')
for pt in pts_in_disk:
    outfile.write(" ".join([("%.5f" % (pt[0]/1000.0)), ("%.5f" % (pt[1]/1000.0)), ("%.5f" % (pt[2]/1000.0))]) + '\n')
outfile.close()
In order to get the most even point density, what I do is basically iteratively run this script from another script, with the 'closeness' criterion reduced on each successive iteration. At some point the script cannot finish, and I just use the points of the last successful iteration.
So my question is rather broad: is there a better way to do this? My method is ok for now, but my gut says that there is a better way to generate such a field of points.
An illustration of the output is graphed below: one with a high closeness criterion, and another with the 'lowest found' closeness criterion (what I want).
A simple solution based on Disk Point Picking from MathWorld:
import numpy as np
import matplotlib.pyplot as plt
n = 1000
r = np.random.uniform(low=0, high=1, size=n)  # uniform variate; the radius is sqrt(r)
theta = np.random.uniform(low=0, high=2*np.pi, size=n) # angle
x = np.sqrt(r) * np.cos(theta)
y = np.sqrt(r) * np.sin(theta)
# for plotting circle line:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'-', alpha=.5) # draw unit circle line
ax.plot(x, y, '.') # plot random points
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives.
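The square root is the essential step: the area of a thin ring grows linearly with its radius, so drawing the radius itself uniformly would pile points up near the centre. A minimal side-by-side sketch of the effect (variable names here are mine, not from the answer above):

import numpy as np
import matplotlib.pyplot as plt

n = 1000
u = np.random.uniform(size=n)                  # uniform variate in [0, 1]
theta = np.random.uniform(0, 2*np.pi, size=n)  # angle

fg, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.plot(u*np.cos(theta), u*np.sin(theta), '.')                    # r = u: clusters at the centre
ax2.plot(np.sqrt(u)*np.cos(theta), np.sqrt(u)*np.sin(theta), '.')  # r = sqrt(u): uniform density
ax1.axis('equal')
ax2.axis('equal')
plt.show()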
Alternatively, you could also create a regular grid and distort it randomly:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
n = 20
tt = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(tt, tt) # create unit square grid
s_x, s_y = xx.ravel(), yy.ravel()
ii = np.argwhere(s_x**2 + s_y**2 <= 1).ravel() # mask off unwanted points
x, y = s_x[ii], s_y[ii]
triang = tri.Triangulation(x, y) # create triangular grid
# distort the grid
g = .5 # distortion factor
rx = x + np.random.uniform(low=-g/n, high=g/n, size=x.shape)
ry = y + np.random.uniform(low=-g/n, high=g/n, size=y.shape)
rtri = tri.Triangulation(rx, ry, triang.triangles) # distorted grid
# for circle:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'k-', alpha=.2) # circle line
ax.triplot(triang, "g-", alpha=.4)
ax.triplot(rtri, 'b-', alpha=.5)
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives
The triangles are just there for visualization. The obvious disadvantage is that depending on your choice of grid, either in the middle or on the borders (as shown here), there will be more or less large "holes" due to the grid discretization.
If you have a defined area like a disc (circle) within which you wish to generate random points, you are better off using the equation for a circle and limiting on the radius:
x^2 + y^2 = r^2 (0 < r < R)
or parametrized to two variables
cos(a) = x/r
sin(a) = y/r
sin^2(a) + cos^2(a) = 1
To generate something like the pseudo-random distribution with low density you should take the following approach:
For randomly distributed ranges of r and a choose n points.
This allows you to generate your distribution to roughly meet your density criteria.
To understand why this works, imagine your circle first divided into small rings of width dr, and then into pie slices of angle da. Your randomness now has equal probability over the whole boxed area around each division of the circle. If you divide the allowed randomness among these areas throughout your circle, you will get a more even distribution over the circle as a whole, with small random variation within the individual areas, giving you the pseudo-random look and feel you are after.
Now your job is just to generate n points for each given area. You will want n to be dependent on r, as the area of each division changes as you move outward from the center. You can proportion this to the exact change in area each division brings:
For the (n-1)-th to the n-th ring:
d(Area,n,n-1) = Area(n) - Area(n-1)
The area of any given ring is:
Area(n) = pi*(dr*n)^2 - pi*(dr*(n-1))^2
So the difference becomes:
d(Area,n,n-1) = [pi*(dr*n)^2 - pi*(dr*(n-1))^2] - [pi*(dr*(n-1))^2 - pi*(dr*(n-2))^2]
d(Area,n,n-1) = pi*[(dr*n)^2 - 2*(dr*(n-1))^2 + (dr*(n-2))^2]
You could expand this out to gain some insight into how much n should increase, but it may be faster to just guess at some percentage increase (30%) or so.
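Carrying the expansion out: the bracket collapses to 2*dr^2, so d(Area,n,n-1) = 2*pi*dr^2 is constant, and the area of the n-th ring itself is pi*dr^2*(2*n - 1), i.e. it grows linearly with n. A throwaway numeric check (dr chosen arbitrarily for illustration):

import math

dr = 1.0

def ring_area(n):
    return math.pi*(dr*n)**2 - math.pi*(dr*(n-1))**2  # area of the n-th ring

print([round(ring_area(n)/math.pi, 6) for n in (1, 2, 3, 4)])  # 1, 3, 5, 7: linear in n
print(ring_area(3) - ring_area(2), 2*math.pi*dr**2)            # the ring-to-ring difference is constant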
The example I have provided is a small subset and decreasing da and dr will dramatically improve your results.
Here is some rough code for generating such points:
import random
import math

R = 10.
n_rings = 10.
n_angles = 10.
dr = R/n_rings
da = 2*math.pi/n_angles
base_points_per_division = 3
increase_per_level = 1.1
points = []
ring = 0
while ring < n_rings:
    angle = 0
    while angle < n_angles:
        for i in range(int(base_points_per_division)):
            ra = angle*da + da*random.random()
            rr = ring*dr + dr*random.random()
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x, y))
        angle += 1
    base_points_per_division = base_points_per_division*increase_per_level
    ring += 1
I tested it with the parameters:
n_rings = 20
n_angles = 20
base_points = .9
increase_per_level = 1.1
And got the following results:
It looks more dense than your provided image, but I imagine further tweaking of those variables could be beneficial.
You can add an additional part to scale the density properly by calculating the number of points per ring.
points_per_ring = density*math.pi*(dr**2)*(2*n+1)
points_per_division = points_per_ring/n_angles
This will provide an even better scaled distribution:
density = .03
points = []
ring = 0
while ring < n_rings:
    angle = 0
    base_points_per_division = density*math.pi*(dr**2)*(2*ring+1)/n_angles
    while angle < n_angles:
        for i in range(int(base_points_per_division)):
            ra = angle*da + min(da, da*random.random())
            rr = ring*dr + dr*random.random()
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x, y))
        angle += 1
    ring += 1
Giving better results using the following parameters:
R = 1.
n_rings = 10.
n_angles = 10.
density = 10/(dr*da) # ~ ten points per unit area
With a graph...
and for fun you can graph the divisions to see how well they match your distribution, and adjust.
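A sketch of what that overlay could look like, reusing math, R, n_rings, n_angles, dr, da and points from the code above (all assumed to be in scope):

import matplotlib.pyplot as plt

fig, ax = plt.subplots(subplot_kw={'aspect': 'equal'})
ax.scatter(*zip(*points), s=4)
ts = [2*math.pi*k/200.0 for k in range(201)]
for ring in range(1, int(n_rings) + 1):       # ring boundaries
    ax.plot([ring*dr*math.cos(t) for t in ts],
            [ring*dr*math.sin(t) for t in ts], 'k-', lw=0.5)
for k in range(int(n_angles)):                # sector boundaries
    ax.plot([0, R*math.cos(k*da)], [0, R*math.sin(k*da)], 'k-', lw=0.5)
plt.show()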
Depending on how random the points need to be, it may be simple enough to just make a grid of points within the disk, and then displace each point by some small but random amount.
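A minimal sketch of that idea (an illustration of mine, not code from the answer above): jitter a square grid and keep the points that land inside the unit disk.

import numpy as np
import matplotlib.pyplot as plt

n = 24                            # grid points across the diameter
jitter = 0.4                      # maximum displacement, as a fraction of one cell
t = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(t, t)
h = t[1] - t[0]                   # grid spacing
xx = xx + np.random.uniform(-jitter*h, jitter*h, xx.shape)
yy = yy + np.random.uniform(-jitter*h, jitter*h, yy.shape)
inside = xx**2 + yy**2 <= 1       # mask off points outside the unit disk
plt.plot(xx[inside], yy[inside], '.')
plt.axis('equal')
plt.show()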
It may be that you want more randomness, but if you just want to fill your disc with an even-looking distribution of points that aren't on an obvious grid, you could try a spiral with a random phase.
import math
import random
import pylab
n = 300
alpha = math.pi * (3 - math.sqrt(5)) # the "golden angle"
phase = random.random() * 2 * math.pi
points = []
for k in range(n):
    theta = k * alpha + phase
    r = math.sqrt(float(k)/n)
    points.append((r * math.cos(theta), r * math.sin(theta)))
pylab.scatter(*zip(*points))
pylab.show()
Probability theory ensures that the rejection method is an appropriate method
to generate uniformly distributed points within the disk, D(0,r), centered at origin and of radius r. Namely, one generates points within the square [-r,r] x [-r,r], until a point falls within the disk:
do {
    generate P in [-r,r]x[-r,r];
} while (P[0]**2 + P[1]**2 > r**2);
return P;
unif_rnd_disk is a generator function implementing this rejection method:
import matplotlib.pyplot as plt
import numpy as np
import itertools
def unif_rnd_disk(r=1.0):
    pt = np.zeros(2)
    while True:
        yield pt
        while True:
            pt = -r + 2*r*np.random.random(2)
            if pt[0]**2 + pt[1]**2 <= r**2:
                break
G = unif_rnd_disk()  # generator of points in disk D(0, r=1)
X, Y = zip(*[pt for pt in itertools.islice(G, 1, 1000)])
plt.scatter(X, Y, color='r', s=3)
plt.axis('equal')
If we want to generate points in a disk centered at C(a,b), we have to apply a translation to the points in the disk D(0,r):
C=[2.0, -3.5]
plt.scatter(C[0]+np.array(X), C[1]+np.array(Y), color='r', s=3)
plt.axis('equal')
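For bulk generation the same rejection idea also vectorizes well in NumPy. A sketch of my own (the function name and the oversampling factor are arbitrary choices, not from the answer above):

import numpy as np

def unif_rnd_disk_batch(n, r=1.0, rng=np.random.default_rng()):
    # Vectorized rejection: the acceptance rate is pi/4 ~ 0.785, so
    # oversampling by 2x per round almost always finishes in one pass.
    out = np.empty((0, 2))
    while len(out) < n:
        cand = rng.uniform(-r, r, size=(2*n, 2))
        cand = cand[(cand**2).sum(axis=1) <= r**2]  # keep points inside the disk
        out = np.vstack([out, cand])
    return out[:n]

pts = unif_rnd_disk_batch(1000)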
Related
Problem with earth orbit plot using python
I am trying to write a code for the orbit of the earth in SI using a symplectic integrator. My attempt is as follows:

import numpy as np
import matplotlib.pyplot as plt

#Set parameters
G = 6.67348e-11
mEar = 5.972e24
mSun = 1.989e30

def earth_orbit(x0, y0, vx0, vy0, N):
    dt = 1/N #timestep
    pos_arr = np.zeros((N,2)) #empty array to store positions
    vel_arr = np.zeros((N,2)) #empty array to store velocities

    #Initial conditions
    # x0 = x
    # y0 = y
    # vx0 = vx
    # vy0 = vy
    pos_arr[0] = (x0,y0) #set the initial positions in the array
    vel_arr[0] = (vx0,vy0) #set the initial velocities in the array

    #Implement Verlet Algorithm
    for k in range(N-1):
        pos_arr[k+1] = pos_arr[k] + vel_arr[k]*dt #update positions
        force = -G * mSun * mEar * pos_arr[k+1] / (np.linalg.norm(pos_arr[k+1])**3) #force calculation
        vel_arr[k+1] = vel_arr[k] + (force/mEar) * dt #update velocities

    #Plot:
    plt.plot(pos_arr, 'go', markersize = 1, label = 'Earth trajectory')
    # plt.plot(0, 0, 'yo', label = 'Sun position') # yellow marker
    # plt.plot(pos_arr[0], 'bo', label = 'Earth initial position') # dark blue marker
    plt.axis('equal')
    plt.xlabel('x')
    plt.ylabel('y')
    return pos_arr, vel_arr

earth_orbit(149.59787e9, 0, 0, 29800, 1000)

The output is 2 dots and I can't figure out if this is a unit issue or a calculation issue?
Display the trajectory

pos_arr contains the x and y coordinates in its columns. To display the whole trajectory, plt.plot(pos_arr[:,0], pos_arr[:,1]) can thus be used. I would prefer to use plt.plot(*pos_arr.T) as a shorter alternative. The line that displays the trajectory must be replaced by:

plt.plot(*pos_arr.T, 'g', label = 'Earth trajectory')

Change the timestep

Here the timestep (in seconds) is chosen as 1/N, where N is the number of iterations. So, the total duration of the simulation is equal to timestep * N = 1 second! For N = 1000, you can instead try with timestep = 3600*12 (half a day), so that the total duration is a little less than 1.5 years. I suggest passing the duration as a parameter of the function earth_orbit and then setting timestep as duration / N.

def earth_orbit(x0, y0, vx0, vy0, N=1000, duration=3.15e7):
    dt = duration / N
    ...
As said in the comments, this is not the Verlet algorithm, but the symplectic Euler algorithm. The difference is in the initialization, but when comparing against a more exact reference solution and with several step sizes, the difference in the orders, 2 vs. 1, will be quite visible. A short change to the time loop, ensuring that the velocities are at the half-time steps as required for Leapfrog Verlet, could look like this:

def force(pos):
    return -G * mSun * mEar * pos / (np.linalg.norm(pos)**3) #force calculation

pos_arr[0] = (x0,y0) #set the initial positions in the array
vel_arr[0] = (vx0,vy0) #set the initial velocities in the array
vel_arr[0] += (force(pos_arr[0])/mEar) * (0.5*dt) #correct for velocity at half-time

for k in range(N-1):
    pos_arr[k+1] = pos_arr[k] + vel_arr[k] * dt #update positions
    vel_arr[k+1] = vel_arr[k] + (force(pos_arr[k+1])/mEar) * dt #update velocities
Adding random weighted point
Let's say I have a blank canvas with 2 red points in it. Is there an algorithm to randomly add a point to the canvas in a way that is biased toward the red points, within a supplied radius? Here's a crude image as an example. Even though this question is for Python, it really applies to any language.
Sure. Select the first point or second point randomly, then generate some distribution with a single scale parameter in polar coordinates, then shift by the center point position. Select some reasonable radial distribution (Gaussian in the code below; exponential or Cauchy might work as well).

import math
import random
import matplotlib.pyplot as plt

def select_point():
    p = random.random()
    if p < 0.5:
        return 0
    return 1

def sample_point(R):
    """ Sample point in polar coordinates """
    phi = 2.0 * math.pi * random.random() # angle
    r = R * random.gauss(0.0, 1.0) # might try different radial distribution, R*random.expovariate(1.0)
    return (r * math.cos(phi), r * math.sin(phi))

def sample(R, points):
    idx = select_point()
    x, y = sample_point(R)
    return (x + points[idx][0], y + points[idx][1])

R = 1.0
points = [(7.1, 3.3), (4.8, -1.4)]
random.seed(12345)

xx = []
yy = []
cc = []
xx.append(points[0][0])
xx.append(points[1][0])
yy.append(points[0][1])
yy.append(points[1][1])
cc.append(0.8)
cc.append(0.8)

for k in range(0, 50):
    x, y = sample(R, points)
    xx.append(x)
    yy.append(y)
    cc.append(0.3)

plt.scatter(xx, yy, c=cc)
plt.show()

Picture
Numpy: Generate grid according to density function
linspace generates a linear space. How can I generate a grid using an arbitrary density function? Say, I would like to have a grid from 0 to 1, with 100 grid points, and where the density of points is given by (x - 0.5)**2 - how would I create such a grid in Python? That is, I want many grid points where the function (x - 0.5)**2 is large, and few points where the function is small. I do not want a grid that has values according to this function.
For example like this:

x = (np.linspace(0.5, 1.5, 100) - 0.5)**2

The start and end values have to be chosen so that f(start) = 0 and f(end) = 1.
In that case the following solution should work. Be sure that func is positive throughout the range...

import numpy as np
from matplotlib import pyplot as plt

def func(x):
    return (x-0.5)**2

start = 0
end = 1
npoints = 100

x = np.linspace(start, end, npoints)
fx = func(x)

# take density (or intervals) as inverse of fx
# g in [0,1] controls how much warping you want.
# g = 0: fully warped
# g = 1: linearly spaced
g = 0
density = (1 + g*(fx - 1))/fx

# sum the intervals to get the new grid
x_density = np.cumsum(density)
# rescale to match the old range
x_density -= x_density.min()
x_density /= x_density.max()
x_density *= (end - start)
x_density += start

fx_density = func(x_density)

plt.plot(x, fx, 'ok', ms = 10, label = 'linear')
plt.plot(x_density, fx_density, 'or', ms = 10, label = 'warped')
plt.legend(loc = 'upper center')
plt.show()
Using FFT to find the center of mass under periodic boundary conditions
I would like to use the Fourier transform to find the center of a simulated entity under periodic boundary conditions; periodic boundary conditions mean that whenever something exits through one side of the box, it is wrapped around to appear on the opposite side, just like in the classic game Asteroids.

So what I have, for each time frame, is a matrix (Nx3) with N the number of points in xyz. What I want to do is determine the center of that cloud, even if it all moved over the periodic boundary and is, so to say, stuck in between.

My idea for a solution would be to make a (mass weighted) histogram of these points, then perform an FFT on that and use the phase of the first Fourier coefficient to determine where in the box the maximum would be.

As a test case I have used:

import numpy as np

Points_x = np.random.randn(10000)
Box_min = -10
Box_max = 10
X = np.linspace(Box_min, Box_max, 100)

### make a histogram of the points
Histogram_Points = np.bincount(np.digitize(Points_x, X), minlength=100)

### make an artificial shift over the periodic boundary
Histogram_Points = np.r_[Histogram_Points[45:], Histogram_Points[:45]]

So now I can use the FFT, since it expects a periodic function anyway:

## doing fft
F = np.fft.fft(Histogram_Points)

## getting rid of everything but the first harmonic
F[2:] = 0.

## back transforming
First_harmonic = np.fft.ifft(F)

That way I get a sine wave with its maximum exactly where the maximum of the histogram is. Now I'd like to extract the position of the maximum not by taking the max function on the sine vector; somehow it should be retrievable from the first (not the 0th) Fourier coefficient, since that should contain the phase shift needed to put the sine's maximum exactly at the maximum of the histogram. Indeed, plotting

Cos_approx = cos(linspace(0, 2*pi, 100) * angle(F[1]))

will give

But I can't figure out how to get the position of the peak from this angle.
Using the FFT is overkill when all you need is one Fourier coefficient. Instead, you can simply compute the dot product of your data with

w = np.exp(-2j*np.pi*np.arange(N) / N)

where N is the number of points. (The time to compute all the Fourier coefficients with the FFT is O(N*log(N)). Computing just one coefficient is O(N).)

Here's a script similar to yours. The data is put in y; the coordinates of the data points are in x.

import numpy as np

N = 100

# x coordinates of the data
xmin = -10
xmax = 10
x = np.linspace(xmin, xmax, N, endpoint=False)

# Generate data in y.
n = 35
y = np.zeros(N)
y[:n] = 1 - np.cos(np.linspace(0, 2*np.pi, n))
y[:n] /= 0.7 + 0.3*np.random.rand(n)
m = 10
y = np.r_[y[m:], y[:m]]

# Compute coefficient 1 of the discrete Fourier transform.
w = np.exp(-2j*np.pi*np.arange(N) / N)
F1 = y.dot(w)
print("F1 =", F1)

# Get the angle of F1 (in the interval [0, 2*pi]).
angle = np.angle(F1.conj())
if angle < 0:
    angle += 2*np.pi
center_x = xmin + (xmax - xmin) * angle / (2*np.pi)
print("center_x =", center_x)

# Create the first sinusoidal mode for the plot.
mode1 = (F1.real * np.cos(2*np.pi*np.arange(N)/N) -
         F1.imag * np.sin(2*np.pi*np.arange(N)/N))/np.abs(F1)

import matplotlib.pyplot as plt

plt.clf()
plt.plot(x, y)
plt.plot(x, mode1)
plt.axvline(center_x, color='r', linewidth=1)
plt.show()

This generates the plot:

To answer the question "Why F1.conj()?": the complex conjugate of F1 is used because of the minus sign in w = np.exp(-2j*np.pi*np.arange(N) / N) (which I used because it is a common convention). Since w can be written

w = np.exp(-2j*np.pi*np.arange(N) / N)
  = cos(-2*pi*arange(N)/N) + 1j*sin(-2*pi*arange(N)/N)
  = cos(2*pi*arange(N)/N) - 1j*sin(2*pi*arange(N)/N)

the dot product y.dot(w) is basically a projection of y onto cos(2*pi*arange(N)/N) (the real part of F1) and -sin(2*pi*arange(N)/N) (the imaginary part of F1). But when we figure out the phase of the maximum, it is based on the functions cos(...) and sin(...). Taking the complex conjugate accounts for the opposite sign of the sin() function. If w = np.exp(2j*np.pi*np.arange(N) / N) were used instead, the complex conjugate of F1 would not be needed.
You could calculate the circular mean directly on your data.

When calculating the circular mean, your data is mapped to the interval -pi..pi. This mapped data is interpreted as the angle to a point on the unit circle. Then the mean value of the x and y components is calculated. The next step is to compute the resulting angle and map it back to the defined "box".

import numpy as np
import matplotlib.pyplot as plt

Points_x = np.random.randn(10000)+1
Box_min = -10
Box_max = 10
Box_width = Box_max - Box_min

# Map points to Box_min ... Box_max with periodic boundaries
Points_x = (Points_x%Box_width + Box_min)

# Map points to -pi..pi
Points_map = (Points_x - Box_min)/Box_width*2*np.pi - np.pi

# Calc circular mean
Pmean_map = np.arctan2(np.sin(Points_map).mean(), np.cos(Points_map).mean())

# Map back
Pmean = (Pmean_map + np.pi)/(2*np.pi) * Box_width + Box_min

# Plotting the result
plt.figure(figsize=(10,3))
plt.subplot(121)
plt.hist(Points_x, 100)
plt.plot([Pmean, Pmean], [0, 1000], c='r', lw=3, alpha=0.5)

plt.subplot(122, aspect='equal')
plt.plot(np.cos(Points_map), np.sin(Points_map), '.')
plt.ylim([-1, 1])
plt.xlim([-1, 1])
plt.grid()
plt.plot([0, np.cos(Pmean_map)], [0, np.sin(Pmean_map)], c='r', lw=3, alpha=0.5)
plt.show()
python optimize.leastsq: fitting a circle to 3d set of points
I am trying to use circle fitting code for a 3D data set. I have modified it for 3D points, just adding the z-coordinate where necessary. My modification works fine for one set of points and badly for another. Please look at the code, if it has some errors.

import trig_items
import numpy as np
from trig_items import *
from numpy import *
from matplotlib import pyplot as p
from scipy import optimize

# Coordinates of the 3D points
##x = r_[36, 36, 19, 18, 33, 26]
##y = r_[14, 10, 28, 31, 18, 26]
##z = r_[0, 1, 2, 3, 4, 5]

x = r_[2144.18908574, 2144.26880854, 2144.05552972, 2143.90303742, 2143.62520676,
       2143.43628579, 2143.14005775, 2142.79919654, 2142.51436023, 2142.11240866,
       2141.68564346, 2141.29333828, 2140.92596405, 2140.3475612, 2139.90848046,
       2139.24661021, 2138.67384709, 2138.03313547, 2137.40301734, 2137.40908256,
       2137.06611224, 2136.50943781, 2136.0553113, 2135.50313189, 2135.07049922,
       2134.62098139, 2134.10459535, 2133.50838433, 2130.6600465, 2130.03537342,
       2130.04047644, 2128.83522468, 2127.79827542, 2126.43513385, 2125.36700593,
       2124.00350543, 2122.68564431, 2121.20709478, 2119.79047011, 2118.38417647,
       2116.90063343, 2115.52685778, 2113.82246629, 2112.21159431, 2110.63180117,
       2109.00713198, 2108.94434529, 2106.82777156, 2100.62343757, 2098.5090226,
       2096.28787738, 2093.91550703, 2091.66075061, 2089.15316429, 2086.69753869,
       2084.3002414, 2081.87590579, 2079.19141866, 2076.5394574, 2073.89128676,
       2071.18786213]
y = r_[725.74913818, 724.43874065, 723.15226506, 720.45950581, 717.77827954,
       715.07048092, 712.39633862, 709.73267688, 707.06039438, 704.43405908,
       701.80074596, 699.15371526, 696.5309022, 693.96109921, 691.35585501,
       688.83496327, 686.32148661, 683.80286662, 681.30705568, 681.30530975,
       679.66483676, 678.01922321, 676.32721779, 674.6667554, 672.9658024,
       671.23686095, 669.52021535, 667.84999077, 659.19757984, 657.46179949,
       657.45700508, 654.46901086, 651.38177517, 648.41739432, 645.32356976,
       642.39034578, 639.42628453, 636.51107198, 633.57732055, 630.63825133,
       627.75308356, 624.80162215, 622.01980232, 619.18814892, 616.37688894,
       613.57400131, 613.61535723, 610.4724493, 600.98277781, 597.84782844,
       594.75983001, 591.77946964, 588.74874068, 585.84525834, 582.92311166,
       579.99564481, 577.06666417, 574.30782762, 571.54115037, 568.79760614,
       566.08551098]
z = r_[339.77146775, 339.60021095, 339.47645894, 339.47130963, 339.37216218,
       339.4126132, 339.67942046, 339.40917728, 339.39500353, 339.15041461,
       339.38959195, 339.3358209, 339.47764895, 339.17854867, 339.14624071,
       339.16403926, 339.02308811, 339.27011082, 338.97684183, 338.95087698,
       338.97321177, 339.02175448, 339.02543922, 338.88725411, 339.06942374,
       339.0557553, 339.04414618, 338.89234303, 338.95572249, 339.00880416,
       339.00413073, 338.91080374, 338.98214758, 339.01135789, 338.96393537,
       338.73446188, 338.62784913, 338.72443217, 338.74880562, 338.69090173,
       338.50765186, 338.49056867, 338.57353355, 338.6196255, 338.43754399,
       338.27218569, 338.10587265, 338.43880881, 338.28962141, 338.14338705,
       338.25784154, 338.49792568, 338.15572139, 338.52967693, 338.4594245,
       338.1511823, 338.03711207, 338.19144663, 338.22022045, 338.29032321,
       337.8623197]

# coordinates of the barycenter
xm = mean(x)
ym = mean(y)
zm = mean(z)

### Basic usage of optimize.leastsq

def calc_R(xc, yc, zc):
    """ calculate the distance of each 3D point from the center (xc, yc, zc) """
    return sqrt((x - xc) ** 2 + (y - yc) ** 2 + (z - zc) ** 2)

def func(c):
    """ calculate the algebraic distance between the 3D points and the mean circle centered at c=(xc, yc, zc) """
    Ri = calc_R(*c)
    return Ri - Ri.mean()

center_estimate = xm, ym, zm
center, ier = optimize.leastsq(func, center_estimate)
##print center
xc, yc, zc = center
Ri = calc_R(xc, yc, zc)
R = Ri.mean()
residu = sum((Ri - R)**2)
print('R =', R)

So, for the first set of x, y, z (commented in the code) it works well: the output is R = 39.0097846735. If I run the code with the second set of points (uncommented), the resulting radius is R = 108576.859834, which is almost a straight line. I plotted the last one. The blue points are the given data set, the red ones are the arc of the resulting radius R = 108576.859834. It is obvious that the given data set has a much smaller radius than the result.

Here is another set of points. It is clear that least squares does not work correctly. Please help me solve this issue.

UPDATE

Here is my solution:

### fit 3D arc into a set of 3D points ###
### output is the centre and the radius of the arc ###
def fitArc3d(arr, eps = 0.0001):
    # Coordinates of the 3D points
    x = numpy.array([arr[k][0] for k in range(len(arr))])
    y = numpy.array([arr[k][4] for k in range(len(arr))])
    z = numpy.array([arr[k][5] for k in range(len(arr))])
    # coordinates of the barycenter
    xm = mean(x)
    ym = mean(y)
    zm = mean(z)
    ### gradient descent minimisation method ###
    pnts = [[x[k], y[k], z[k]] for k in range(len(x))]
    meanP = Point(xm, ym, zm) # mean point
    Ri = [Point(*meanP).distance(Point(*pnts[k])) for k in range(len(pnts))] # radii to the points
    Rm = math.fsum(Ri) / len(Ri) # mean radius
    dR = Rm + 10 # difference between mean radii
    alpha = 0.1
    c = meanP
    cArr = []
    while dR > eps:
        cArr.append(c)
        Jx = math.fsum([2 * (x[k] - c[0]) * (Ri[k] - Rm) / Ri[k] for k in range(len(Ri))])
        Jy = math.fsum([2 * (y[k] - c[1]) * (Ri[k] - Rm) / Ri[k] for k in range(len(Ri))])
        Jz = math.fsum([2 * (z[k] - c[2]) * (Ri[k] - Rm) / Ri[k] for k in range(len(Ri))])
        gradJ = [Jx, Jy, Jz] # find gradient
        c = [c[k] + alpha * gradJ[k] for k in range(len(c)) if len(c) == len(gradJ)] # find new centre point
        Ri = [Point(*c).distance(Point(*pnts[k])) for k in range(len(pnts))] # calculate new radii
        RmOld = Rm
        Rm = math.fsum(Ri) / len(Ri) # calculate new mean radius
        dR = abs(Rm - RmOld) # new difference between mean radii
    return Point(*c), Rm

It is not very optimal code (I do not have time to fine tune it) but it works.
I guess the problem is the data and the corresponding algorithm. The least square method works fine if it produces a local parabolic minimum, such that a simple gradient method moves approximately in the direction of the minimum. Unfortunately, this is not necessarily the case for your data. You can check this by keeping some rough estimates for xc and yc fixed and plotting the sum of the squared residuals as a function of zc and R. I get a boomerang shaped minimum. Depending on your starting parameters you might end up in one of the branches going away from the real minimum. Once in the valley this can be very flat, such that you exceed the number of max iterations or get something that is accepted within the tolerance of the algorithm. As always, things are better the better your starting parameters. Unfortunately you have only a small arc of the circle, so it is difficult to do better. I am not a specialist in Python, but I think that leastsq allows you to play with the Jacobian and gradient methods. Try to play with the tolerance as well.

In short: the code looks basically fine to me, but your data is pathological and you have to adapt the code to that kind of data.

There is a non-iterative solution in 2D from Karimäki; maybe you can adapt this method to 3D. You can also look at this. Surely you will find more literature.

I just checked the data using a simplex algorithm. The minimum is, as I said, not well behaved. See here some cuts of the residual function. Only in the xy-plane do you get some reasonable behavior. The properties of the zr- and xr-planes make the finding process very difficult. So in the beginning the simplex algorithm finds several almost stable solutions. You can see them as flat steps in the graph below (blue x, purple y, yellow z, green R). At the end the algorithm has to walk down the almost flat but very stretched out valley, resulting in the final convergence of z and R. Nevertheless, I expect many regions that look like a solution if the tolerance is insufficient. With the standard tolerance of 10^-5 the algorithm stopped after approx. 350 iterations. I had to set it to 10^-10 to get this solution, i.e. [1899.32, 741.874, 298.696, 248.956], which seems quite ok.

Update

As mentioned earlier, the solution depends on the working precision and requested accuracy. So your hand made gradient method probably works better, as those values are different compared to the built-in least square fit. Nevertheless, this is my version making a two step fit. First I fit a plane to the data. In the next step I fit a circle within this plane. Both steps use the least square method. This time it works, as each step avoids critically shaped minima. (Naturally, the plane fit runs into problems if the arc segment becomes small and the data lies virtually on a straight line. But this will happen for all algorithms.)

from math import *
from matplotlib import pyplot as plt
from scipy import optimize
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import pprint as pp

dataTupel = list(zip(xs, ys, zs)) #your data from above

# Fitting a plane first
# let the affine plane be defined by two vectors,
# the zero point P0 and the plane normal n0
# a point p is member of the plane if (p-p0).n0 = 0

def distanceToPlane(p0, n0, p):
    return np.dot(np.array(n0), np.array(p) - np.array(p0))

def residualsPlane(parameters, dataPoint):
    px, py, pz, theta, phi = parameters
    nx, ny, nz = sin(theta)*cos(phi), sin(theta)*sin(phi), cos(theta)
    distances = [distanceToPlane([px,py,pz], [nx,ny,nz], [x,y,z]) for x,y,z in dataPoint]
    return distances

estimate = [1900, 700, 335, 0, 0] # px, py, pz and theta, phi
#you may automize this by using the center of mass data
# note that the normal vector is given in polar coordinates
bestFitValues, ier = optimize.leastsq(residualsPlane, estimate, args=(dataTupel,))
xF, yF, zF, tF, pF = bestFitValues

point = [xF, yF, zF]
normal = [sin(tF)*cos(pF), sin(tF)*sin(pF), cos(tF)]

# Fitting a circle inside the plane
#creating two inplane vectors
sArr = np.cross(np.array([1,0,0]), np.array(normal)) #assuming that normal not parallel x!
sArr = sArr/np.linalg.norm(sArr)
rArr = np.cross(sArr, np.array(normal))
rArr = rArr/np.linalg.norm(rArr) #should be normalized already, but anyhow

def residualsCircle(parameters, dataPoint):
    r, s, Ri = parameters
    planePointArr = s*sArr + r*rArr + np.array(point)
    distance = [np.linalg.norm(planePointArr - np.array([x,y,z])) for x,y,z in dataPoint]
    res = [(Ri - dist) for dist in distance]
    return res

estimateCircle = [0, 0, 335] # r, s and Ri
bestCircleFitValues, ier = optimize.leastsq(residualsCircle, estimateCircle, args=(dataTupel,))
rF, sF, RiF = bestCircleFitValues
print(bestCircleFitValues)

# Synthetic Data
centerPointArr = sF*sArr + rF*rArr + np.array(point)
synthetic = [list(centerPointArr + RiF*cos(phi)*rArr + RiF*sin(phi)*sArr) for phi in np.linspace(0, 2*pi, 50)]
[cxTupel, cyTupel, czTupel] = [x for x in zip(*synthetic)]

### Plotting
d = -np.dot(np.array(point), np.array(normal)) # dot product
# create x,y mesh
xx, yy = np.meshgrid(np.linspace(2000,2200,10), np.linspace(540,740,10))
# calculate corresponding z
# Note: does not work if normal vector is without z-component
z = (-normal[0]*xx - normal[1]*yy - d)/normal[2]

# plot the surface, data, and synthetic circle
fig = plt.figure()
ax = fig.add_subplot(211, projection='3d')
ax.scatter(xs, ys, zs, c='b', marker='o')
ax.plot_wireframe(xx, yy, z)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')

bx = fig.add_subplot(212, projection='3d')
bx.scatter(xs, ys, zs, c='b', marker='o')
bx.scatter(cxTupel, cyTupel, czTupel, c='r', marker='o')
bx.set_xlabel('X Label')
bx.set_ylabel('Y Label')
bx.set_zlabel('Z Label')
plt.show()

which gives a radius of 245. This is close to what the other approach gave (249). So within error margins I get the same.

The plotted result looks reasonable. Hope this helps.
It feels like you missed some constraints in your first version of the code. The implementation could be described as fitting a sphere to 3D points. That's why the radius for the second data list comes out as almost a straight line: the fit thinks you are giving it a small arc of a large sphere.
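To see why the sphere view makes the problem degenerate: for points lying on a planar circle, every candidate centre on the axis through the circle's centre is exactly equidistant from all of them, so a sphere fit cannot pin the centre (or the radius) down. A tiny synthetic demonstration (made-up data, not the question's):

import numpy as np

t = np.linspace(0.0, 0.5, 30)                                # half a radian of arc
pts = np.c_[100*np.cos(t), 100*np.sin(t), np.zeros_like(t)]  # radius-100 circle in the z=0 plane
for h in (0.0, 50.0, 200.0):                                 # slide the centre along the plane normal
    Ri = np.linalg.norm(pts - np.array([0.0, 0.0, h]), axis=1)
    print(h, Ri.mean(), Ri.std())                            # std stays ~0: every such centre fits equally well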