Given a line segment with endpoint coordinates 'start' and 'end', and the coordinates of a point 'pnt', find the shortest distance from pnt to the segment. I have tried the code below.
import math

def dot(v, w):
    x, y, z = v
    X, Y, Z = w
    return x*X + y*Y + z*Z

def length(v):
    x, y, z = v
    return math.sqrt(x*x + y*y + z*z)

def vector(b, e):
    x, y, z = b
    X, Y, Z = e
    return (X-x, Y-y, Z-z)

def unit(v):
    x, y, z = v
    mag = length(v)
    return (x/mag, y/mag, z/mag)

def distance(p0, p1):
    return length(vector(p0, p1))

def scale(v, sc):
    x, y, z = v
    return (x * sc, y * sc, z * sc)

def add(v, w):
    x, y, z = v
    X, Y, Z = w
    return (x+X, y+Y, z+Z)

def pnt2line(pnt, start, end):
    line_vec = vector(start, end)
    pnt_vec = vector(start, pnt)
    line_len = length(line_vec)
    line_unitvec = unit(line_vec)
    pnt_vec_scaled = scale(pnt_vec, 1.0/line_len)
    t = dot(line_unitvec, pnt_vec_scaled)
    if t < 0.0:
        t = 0.0
    elif t > 1.0:
        t = 1.0
    nearest = scale(line_vec, t)
    dist = distance(nearest, pnt_vec)
    nearest = add(nearest, start)
    return (dist, nearest)
The solution can be explained geometrically: the locus of points at a given distance from a segment consists of two half-circles and two line segments, separated by the perpendiculars to the segment at its endpoints.
We can simplify the discussion by putting the segment in a canonical position, with endpoints (0, 0) and (L, 0). For any segment, we can apply a similarity transformation to bring it into the canonical position (see below) and move the target point accordingly.
Now the computation of the distance amounts to
X < 0 -> √[X² + Y²]
0 ≤ X ≤ L -> |Y|
L < X -> √[(X-L)² + Y²]
Subtract the coordinates of one endpoint from all points, so the segment starts at the origin.
Compute the length L.
Normalize the vector to the second endpoint to obtain a unit vector; call it U.
Transform the target point with X' = Ux·X + Uy·Y, Y' = Ux·Y - Uy·X.
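A minimal 2D sketch of these steps (the function name is mine, not from the answer above):

import math

def point_segment_distance_2d(pnt, start, end):
    # shift so that 'start' sits at the origin
    ex, ey = end[0] - start[0], end[1] - start[1]
    px, py = pnt[0] - start[0], pnt[1] - start[1]
    L = math.hypot(ex, ey)
    ux, uy = ex / L, ey / L              # unit vector U along the segment
    # rotate so the segment lies on the positive x-axis
    X = ux * px + uy * py
    Y = ux * py - uy * px
    # piecewise distance from the canonical-position analysis
    if X < 0:
        return math.hypot(X, Y)
    if X > L:
        return math.hypot(X - L, Y)
    return abs(Y)

For example, point_segment_distance_2d((2, 3), (0, 0), (4, 0)) returns 3.0, the vertical distance to a segment lying on the x-axis.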
Technical remark:
The geometric analysis shows that the distance is the square root of a piecewise quadratic function, and that it takes one or two comparisons to tell which piece is active; this cannot be avoided. If I am right, the algebraic expressions cannot be simplified much.
I have a point in 3D
p = [0,1,0]
and a list of line segments defined by their starting and ending co-ordinates.
line_starts = [[1,1,1], [2,2,2], [3,3,3]]
line_ends = [[5,1,3], [3,2,1], [3, 1, 1]]
I tried adapting the first two algorithms detailed in this post:
Find the shortest distance between a point and line segments (not line)
But the algorithms are either extremely slow for more than 1k points and line segments, or do not work in 3 dimensions. Is there an efficient way to compute the minimum distance from a point to a line segment, and return the coordinates of that point on the segment?
For example, I was able to adapt the code below from the post linked above, but it is extremely slow.
import math
import numpy as np

def dot(v, w):
    x, y, z = v
    X, Y, Z = w
    return x*X + y*Y + z*Z

def length(v):
    x, y, z = v
    return math.sqrt(x*x + y*y + z*z)

def vector(b, e):
    x, y, z = b
    X, Y, Z = e
    return (X-x, Y-y, Z-z)

def unit(v):
    x, y, z = v
    mag = length(v)
    return (x/mag, y/mag, z/mag)

def distance(p0, p1):
    return length(vector(p0, p1))

def scale(v, sc):
    x, y, z = v
    return (x * sc, y * sc, z * sc)

def add(v, w):
    x, y, z = v
    X, Y, Z = w
    return (x+X, y+Y, z+Z)

'''Given a line with coordinates 'start' and 'end' and the
coordinates of a point 'pnt' the proc returns the shortest
distance from pnt to the line and the coordinates of the
nearest point on the line.

1  Convert the line segment to a vector ('line_vec').
2  Create a vector connecting start to pnt ('pnt_vec').
3  Find the length of the line vector ('line_len').
4  Convert line_vec to a unit vector ('line_unitvec').
5  Scale pnt_vec by line_len ('pnt_vec_scaled').
6  Get the dot product of line_unitvec and pnt_vec_scaled ('t').
7  Ensure t is in the range 0 to 1.
8  Use t to get the nearest location on the line to the end
   of vector pnt_vec_scaled ('nearest').
9  Calculate the distance from nearest to pnt_vec_scaled.
10 Translate nearest back to the start/end line.

Malcolm Kesson 16 Dec 2012'''

def pnt2line(array):
    pnt = array[0]
    start = array[1]
    end = array[2]
    line_vec = vector(start, end)
    pnt_vec = vector(start, pnt)
    line_len = length(line_vec)
    line_unitvec = unit(line_vec)
    pnt_vec_scaled = scale(pnt_vec, 1.0/line_len)
    t = dot(line_unitvec, pnt_vec_scaled)
    if t < 0.0:
        t = 0.0
    elif t > 1.0:
        t = 1.0
    nearest = scale(line_vec, t)
    dist = distance(nearest, pnt_vec)
    nearest = add(nearest, start)
    return (round(dist, 3), [round(i, 3) for i in nearest])

def get_nearest_line(input_d):
    '''
    input_d is an array of arrays
    Each subarray is [point, line_start, line_end]
    The point must be the same for all sub_arrays
    '''
    op = np.array(list(map(pnt2line, input_d)))
    ind = np.argmin(op[:, 0])
    return ind, op[ind, 0], op[ind, 1]

if __name__ == '__main__':
    p = [0, 1, 0]
    line_starts = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
    line_ends = [[5, 1, 3], [3, 2, 1], [3, 1, 1]]
    input_d = [[p, line_starts[i], line_ends[i]] for i in range(len(line_starts))]
    print(get_nearest_line(input_d))
Output:
(0, 1.414, [1.0, 1.0, 1.0])
Here,
(0 - the first line segment was closest,
1.414 - the distance to the line segment,
[1.0, 1.0, 1.0] - the point on the line segment closest to the given point)
The problem is that the above code is extremely slow.
Further, I have about 10K points, and a fixed set of 10K line segments. For each of the points,
I have to find the closest line segment, and the point on the line segment which is the closest.
Right now it takes 30 mins to process 10K points.
Is there an efficient way to achieve this?
You can try this:
import numpy as np

def dot(v, w):
    """
    row-wise dot product of 2-dimensional arrays
    """
    return np.einsum('ij,ij->i', v, w)

def closest(line_starts, line_ends, p):
    """
    find the line segment closest to the point p
    """
    # array of vectors from the start to the end of each line segment
    se = line_ends - line_starts
    # array of vectors from the start of each line segment to the point p
    sp = p - line_starts
    # array of vectors from the end of each line segment to p
    ep = p - line_ends
    # orthogonal projection of sp onto se
    proj = (dot(sp, se) / dot(se, se)).reshape(-1, 1) * se
    # orthogonal complement of the projection
    n = sp - proj
    # squares of distances from the start of each line segment to p
    starts_d = dot(sp, sp)
    # squares of distances from the end of each line segment to p
    ends_d = dot(ep, ep)
    # squares of distances between p and each line
    lines_d = dot(n, n)
    # If the point determined by the projection is inside
    # the line segment, it is the point of the line segment
    # closest to p; otherwise the closest point is one of
    # the endpoints. Determine which of these cases holds
    # and compute the square of the distance to each line segment.
    coeffs = dot(proj, se)
    dist = np.select([coeffs < 0, coeffs < dot(se, se), True],
                     [starts_d, lines_d, ends_d])
    # find the index of the closest line segment, its distance to p,
    # and the point in this line segment closest to p
    idx = np.argmin(dist)
    min_dist = dist[idx]
    if min_dist == starts_d[idx]:
        min_point = line_starts[idx]
    elif min_dist == ends_d[idx]:
        min_point = line_ends[idx]
    else:
        min_point = line_starts[idx] + proj[idx]
    return idx, min_dist**0.5, min_point
Example:
line_starts = np.array([[1,1,1], [2,2,2], [3,3,3]])
line_ends = np.array([[5,1,3], [3,2,1], [3, 1, 1]])
p = np.array([0,1,0])
idx, dist, point = closest(line_starts, line_ends, p)
print(f"index = {idx}\ndistance = {dist}\nclosest point = {point}")
It gives:
index = 0
distance = 1.4142135623730951
closest point = [1 1 1]
Since in your case the line segments are fixed and only the points change, the computation can be specialized to that situation, for example by precomputing the per-segment quantities once.
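A minimal sketch of that idea (the names precompute and closest_to_point are mine, not part of the answer above), which computes the segment data once and reuses it for every query point:

import numpy as np

def precompute(line_starts, line_ends):
    """Per-segment quantities that do not depend on the query point."""
    se = line_ends - line_starts              # segment direction vectors
    se_sq = np.einsum('ij,ij->i', se, se)     # squared segment lengths
    return se, se_sq

def closest_to_point(line_starts, se, se_sq, p):
    """Index of the nearest segment to p, the distance, and the nearest point."""
    sp = p - line_starts
    # projection parameter clamped to [0, 1] for every segment at once
    t = np.clip(np.einsum('ij,ij->i', sp, se) / se_sq, 0.0, 1.0)
    nearest = line_starts + t[:, None] * se   # nearest point on each segment
    diff = p - nearest
    d_sq = np.einsum('ij,ij->i', diff, diff)
    idx = np.argmin(d_sq)
    return idx, d_sq[idx] ** 0.5, nearest[idx]

line_starts = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]], dtype=float)
line_ends = np.array([[5, 1, 3], [3, 2, 1], [3, 1, 1]], dtype=float)
se, se_sq = precompute(line_starts, line_ends)
for p in np.array([[0, 1, 0], [1, 0, 1]], dtype=float):
    print(closest_to_point(line_starts, se, se_sq, p))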
I have multiple GPS coordinate points I would like to create a line from in Python. The points aren't in a straight line, but they are accurate enough to connect them with straight segments.
I know how to connect one point to another, but not how to join several of these individual segments into one longer path and then get a point based on a percentage of the whole path.
I use this code to get the percentage of a singular line:
def pointAtPercent(p0, p1, percent):
    if p0.x != p1.x:
        x = p0.x + percent * (p1.x - p0.x)
    else:
        x = p0.x
    if p0.y != p1.y:
        y = p0.y + percent * (p1.y - p0.y)
    else:
        y = p0.y
    p = point()
    p.x = x
    p.y = y
    return p
Here is an example list:
[ 10.053417,
53.555737,
10.053206,
53.555748,
10.052497,
53.555763,
10.051125,
53.555757,
10.049193,
53.555756,
10.045511,
53.555762,
10.044863,
53.555767,
10.044319,
53.555763,
10.043685,
53.555769,
10.042765,
53.555759,
10.04201,
53.555756,
10.041919,
53.555757,
10.041904,
53.555766
]
You could create a list of x,y pairs and access the GPS point based on the length of the list:
points = [
    10.053417, 53.555737, 10.053206, 53.555748, 10.052497, 53.555763, 10.051125,
    53.555757, 10.049193, 53.555756, 10.045511, 53.555762, 10.044863, 53.555767,
    10.044319, 53.555763, 10.043685, 53.555769, 10.042765, 53.555759, 10.04201,
    53.555756, 10.041919, 53.555757, 10.041904, 53.555766
]
points = [(points[i], points[i + 1]) for i in range(0, len(points) - 1, 2)]

def pointAtPercent(points, percent):
    lstIndex = int(
        len(points) / 100. * percent
    )  # possibly creates some rounding issues!
    print(points[lstIndex - 1])

pointAtPercent(points, 10)
pointAtPercent(points, 27.5)
pointAtPercent(points, 50)
pointAtPercent(points, 100)
Out:
(10.053417, 53.555737)
(10.052497, 53.555763)
(10.045511, 53.555762)
(10.041904, 53.555766)
The basic algorithm is this:
Determine the length of each segment. You are in spherical polar coordinates (assuming the Earth is a sphere, which it isn't; see WGS 84 if you need more precision), so you can use the great-circle distance.
def great_circle_distance(lat_0, lon_0, lat_1, lon_1):
    return math.acos(
        math.sin(lat_0) * math.sin(lat_1)
        + math.cos(lat_0) * math.cos(lat_1) * math.cos(lon_1 - lon_0)
    )

radian_points = [(math.radians(p.x), math.radians(p.y)) for p in points]
lengths = [
    great_circle_distance(*p0, *p1) for p0, p1 in zip(radian_points, radian_points[1:])
]
path_length = sum(lengths)
Given a percentage (expressed as a fraction between 0 and 1), you can work out how far along the path it is.
distance_along = percentage * path_length
Find the index of the correct segment.
# Inefficient but easy (consider bisect.bisect with a stored list of sums)
index = max(
    next(i for i in range(len(lengths) + 1) if sum(lengths[:i]) >= distance_along) - 1,
    0,
)
Then use your original algorithm.
point = pointAtPercent(
    points[index],
    points[index + 1],
    (distance_along - sum(lengths[:index])) / lengths[index],
)
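Putting these steps together, a rough sketch (the function name point_along_path is mine; it assumes the same point objects with .x/.y attributes as in the question, and percentage expressed as a fraction between 0 and 1):

def point_along_path(points, percentage):
    radian_points = [(math.radians(p.x), math.radians(p.y)) for p in points]
    lengths = [
        great_circle_distance(*p0, *p1)
        for p0, p1 in zip(radian_points, radian_points[1:])
    ]
    path_length = sum(lengths)
    distance_along = percentage * path_length

    # walk the segments until the accumulated length reaches distance_along
    accumulated = 0.0
    for index, seg_len in enumerate(lengths):
        if accumulated + seg_len >= distance_along:
            break
        accumulated += seg_len

    return pointAtPercent(
        points[index],
        points[index + 1],
        (distance_along - accumulated) / lengths[index],
    )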
I have a set of objects (triangles) in the 2D plane, and I want to separate them by a line into two sets of about the same size.
Because the line will normally intersect some of the triangles, I get three sets: one on the left side, one on the right side, and one of triangles that collide with the line.
I now want to find a good line. I figured out a cost function:
cost=-min(len(set_left), len(set_right))
Unfortunately, I can't think of a nice algorithm to solve this.
I have written a Python example to show my problem (I use the real part for the x coordinate and the imaginary part for the y coordinate):
import scipy.optimize
import numpy as np

def scalar_prod(v1, v2):
    return v1.real*v2.real + v1.imag*v2.imag

def intersect_line_triangle(line, triangle):
    point = line[0]
    dir_ = line[1]
    # calculate normal vector
    n_vec = 1j * dir_
    # Calculate signed distance of each point
    dist = tuple(scalar_prod(n_vec, p-point) for p in triangle)
    if all(d > 0 for d in dist):
        return 1   # right
    if all(d < 0 for d in dist):
        return -1  # left
    return 0       # intersecting

def split_triangles_by_line(triangles, line):
    out = {-1: [], 0: [], 1: []}
    for tri in triangles:
        out[intersect_line_triangle(line, tri)].append(tri)
    return out

def calc_cost(triangles, line):
    split = split_triangles_by_line(triangles, line)
    cost = -(min(len(split[-1]), len(split[1])))
    return cost

def calc_line(triangles, method='Powell'):
    # TODO: think about a good algorithm!
    center_point = sum(sum(tri) for tri in triangles) / (len(triangles)*3)
    init_point = center_point
    fun = lambda args: calc_cost(triangles, (args[0] + 1j*args[1], np.exp(1j*args[2])))
    res = scipy.optimize.minimize(fun, [init_point.real, init_point.imag, np.pi/2], method=method)
    res_line = (res.x[0] + 1j*res.x[1], np.exp(1j*res.x[2]))
    return res_line

triangles = [(0, 3j, 2), (4, 2+2j, 6+2j),
             (4j, 3+4j, 3+7j), (4+3j, 5+3j, 4+10j),
             (-1+5j, -1+8j, 3+9j)]
line = calc_line(triangles)
sep_triangles = split_triangles_by_line(triangles, line)

print("The resulting line is {} + {} * t".format(line[0], line[1]))
print("The triangles are separated:\nleft: {}\nright: {}\nintersected: {}".format(
    sep_triangles[-1], sep_triangles[1], sep_triangles[0]))
print("The cost is {}".format(calc_cost(triangles, line)))
I want to replace the optimizer part with some efficient algorithm. I guess that computer graphics experts may use similar techniques.
Thanks in advance!
Is there a library or a way to calculate the center point of several geolocation points?
This is my list of geolocations based in New York; I want to find the approximate midpoint geolocation:
L = [
    (-74.2813611, 40.8752222),
    (-73.4134167, 40.7287778),
    (-74.3145014, 40.9475244),
    (-74.2445833, 40.6174444),
    (-74.4148889, 40.7993333),
    (-73.7789256, 40.6397511)
]
After the comments I received and a comment from HERE:
With coordinates that close to each other, you can treat the Earth as being locally flat and simply find the centroid as though they were planar coordinates. Then you would simply take the average of the latitudes and the average of the longitudes to find the latitude and longitude of the centroid.
lons = []
lats = []
for l in L:  # each entry of L is a (longitude, latitude) pair
    lons.append(l[0])
    lats.append(l[1])

sum(lons) / len(lons)
sum(lats) / len(lats)
-74.07461283333332, 40.76800886666667
Based on: https://gist.github.com/tlhunter/0ea604b77775b3e7d7d25ea0f70a23eb
Assuming you have a pandas DataFrame with latitude and longitude columns, the following code will return a dictionary with the mean coordinates.
import math

x = 0.0
y = 0.0
z = 0.0

for i, coord in coords_df.iterrows():
    latitude = math.radians(coord.latitude)
    longitude = math.radians(coord.longitude)
    x += math.cos(latitude) * math.cos(longitude)
    y += math.cos(latitude) * math.sin(longitude)
    z += math.sin(latitude)

total = len(coords_df)

x = x / total
y = y / total
z = z / total

central_longitude = math.atan2(y, x)
central_square_root = math.sqrt(x * x + y * y)
central_latitude = math.atan2(z, central_square_root)

mean_location = {
    'latitude': math.degrees(central_latitude),
    'longitude': math.degrees(central_longitude)
}
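For the list L from the earlier question (stored as (longitude, latitude) pairs), the DataFrame could be built like this; the construction is only an illustration, not part of the original snippet:

import pandas as pd

coords_df = pd.DataFrame(
    [{'latitude': lat, 'longitude': lon} for lon, lat in L]
)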
Considering that you are using the signed degrees format (more), simple averaging of latitudes and longitudes creates problems even for small regions near the antimeridian (i.e. +180 or -180 degrees longitude) due to the discontinuity of the longitude value at that line (a sudden jump between -180 and 180).
Consider two locations whose longitudes are -179 and 179: their mean would be 0, which is wrong.
This link can be useful: first convert lat/lon into an n-vector, then find the average. A first stab at converting the code into Python is below.
import numpy as np
import numpy.linalg as lin

E = np.array([[0, 0, 1],
              [0, 1, 0],
              [-1, 0, 0]])

def lat_long2n_E(latitude, longitude):
    res = [np.sin(np.deg2rad(latitude)),
           np.sin(np.deg2rad(longitude)) * np.cos(np.deg2rad(latitude)),
           -np.cos(np.deg2rad(longitude)) * np.cos(np.deg2rad(latitude))]
    return np.dot(E.T, np.array(res))

def n_E2lat_long(n_E):
    n_E = np.dot(E, n_E)
    longitude = np.arctan2(n_E[1], -n_E[2])
    equatorial_component = np.sqrt(n_E[1]**2 + n_E[2]**2)
    latitude = np.arctan2(n_E[0], equatorial_component)
    return np.rad2deg(latitude), np.rad2deg(longitude)

def average(coords):
    res = []
    for lat, lon in coords:
        res.append(lat_long2n_E(lat, lon))
    res = np.array(res)
    m = np.mean(res, axis=0)
    m = m / lin.norm(m)
    return n_E2lat_long(m)

n = lat_long2n_E(30, 20)
print(n)
print(n_E2lat_long(np.array(n)))

# find middle of france and libya
coords = [[30, 20], [47, 3]]
m = average(coords)
print(m)
I would like to improve on @BBSysDyn's answer.
The average calculation can be biased if you are calculating the center of a polygon with extra vertices on one side. Therefore the average function can be replaced with the centroid calculation explained here:
def get_centroid(points):
    x = points[:, 0]
    y = points[:, 1]

    # solving for the polygon's signed area
    A = 0
    for i, value in enumerate(x):
        if i + 1 == len(x):
            A += (x[i]*y[0] - x[0]*y[i])
        else:
            A += (x[i]*y[i+1] - x[i+1]*y[i])
    A = A/2

    # solving for the x coordinate of the centroid
    Cx = 0
    for i, value in enumerate(x):
        if i + 1 == len(x):
            Cx += (x[i]+x[0]) * ((x[i]*y[0]) - (x[0]*y[i]))
        else:
            Cx += (x[i]+x[i+1]) * ((x[i]*y[i+1]) - (x[i+1]*y[i]))
    Cx = Cx/(6*A)

    # solving for the y coordinate of the centroid
    Cy = 0
    for i, value in enumerate(y):
        if i + 1 == len(x):
            Cy += (y[i]+y[0]) * ((x[i]*y[0]) - (x[0]*y[i]))
        else:
            Cy += (y[i]+y[i+1]) * ((x[i]*y[i+1]) - (x[i+1]*y[i]))
    Cy = Cy/(6*A)
    return Cx, Cy
Note: if it is a polygon (more than two points), the points must be listed in the order in which the polygon or shape would be drawn.
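A quick sanity check on a hypothetical unit square (the question's points are not ordered as a polygon, so made-up data is used here):

import numpy as np

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(get_centroid(square))  # expected (0.5, 0.5)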
I'm currently incredibly stuck on what isn't working in my code and have been staring at it for hours. I have created some functions to approximate the solution of the Laplace equation adaptively using the finite element method, and then to estimate its error using the dual weighted residual. The error function should give a vector of errors (one error for each element); I then choose the biggest errors, add more elements around them, solve again and then recheck the error. However, I have no idea why my error estimate isn't changing!
My first 4 functions are correct, but I will include them in case someone wants to try the code:
# Imports inferred from the calls used below (they were not shown in the question)
import math
import numpy as np
import scipy.sparse.linalg
from scipy.sparse import diags
from scipy.interpolate import interp1d

def Poisson_Stiffness(x0):
    """Finds the Poisson equation stiffness matrix with any non uniform mesh x0"""
    x0 = np.array(x0)
    N = len(x0) - 1  # The amount of elements; x0, x1, ..., xN
    h = x0[1:] - x0[:-1]
    a = np.zeros(N+1)
    a[0] = 1  # BOUNDARY CONDITIONS
    a[1:-1] = 1/h[1:] + 1/h[:-1]
    a[-1] = 1/h[-1]
    a[N] = 1  # BOUNDARY CONDITIONS
    b = -1/h
    b[0] = 0  # BOUNDARY CONDITIONS
    c = -1/h
    c[N-1] = 0  # BOUNDARY CONDITIONS: DIRICHLET
    data = [a.tolist(), b.tolist(), c.tolist()]
    Positions = [0, 1, -1]
    Stiffness_Matrix = diags(data, Positions, (N+1, N+1))
    return Stiffness_Matrix

def NodalQuadrature(x0):
    """Finds the Nodal Quadrature Approximation of sin(pi x)"""
    x0 = np.array(x0)
    h = x0[1:] - x0[:-1]
    N = len(x0) - 1
    approx = np.zeros(len(x0))
    approx[0] = 0  # BOUNDARY CONDITIONS
    for i in range(1, N):
        approx[i] = math.sin(math.pi*x0[i])
        approx[i] = (approx[i]*h[i-1] + approx[i]*h[i])/2
    approx[N] = 0  # BOUNDARY CONDITIONS
    return approx

def Solver(x0):
    Stiff_Matrix = Poisson_Stiffness(x0)
    NodalApproximation = NodalQuadrature(x0)
    NodalApproximation[0] = 0
    U = scipy.sparse.linalg.spsolve(Stiff_Matrix, NodalApproximation)
    return U

def Dualsolution(rich_mesh, qoi_rich_node):  # BOUNDARY CONDITIONS?
    """Find Z from stiffness matrix Z = K^-1 Q over richer mesh"""
    K = Poisson_Stiffness(rich_mesh)
    Q = np.zeros(len(rich_mesh))
    Q[qoi_rich_node] = 1.0
    Z = scipy.sparse.linalg.spsolve(K, Q)
    return Z
My error indicator function takes in an approximation Uh, with the mesh it is solved over, and finds eta = (f - Bu)z.
def Error_Indicators(Uh, U_mesh, Z, Z_mesh, f):
    """Take in U, interpolate to the same mesh as Z, then solve for the eta vector"""
    u_inter = interp1d(U_mesh, Uh)  # Interpolation of old mesh
    U2 = u_inter(Z_mesh)            # New function u for the new mesh to use
    Bz = Poisson_Stiffness(Z_mesh)
    Bz = Bz.tocsr()
    eta = np.empty(len(Z_mesh))
    for i in range(len(Z_mesh)):
        for j in range(len(Z_mesh)):
            eta[i] += (f[i] - Bz[i, j]*U2[j])
    for i in range(len(Z)):
        eta[i] = eta[i]*Z[i]
    return eta
My next function seems to adapt the mesh very well to the given error indicator! I just have no idea why the indicator seems to stay the same regardless.
def Mesh_Refinement(base_mesh, tolerance, refinement, z_mesh, QOI_z_mesh):
    """Solve for U on a normal mesh, take in Z, find error indicators, adapt. OUTPUT NEW MESH"""
    New_mesh = base_mesh
    Z = Dualsolution(z_mesh, QOI_z_mesh)  # Solve dual solution only once
    f = np.empty(len(z_mesh))
    for i in range(len(z_mesh)):
        f[i] = math.sin(math.pi*z_mesh[i])
    U = Solver(New_mesh)
    eta = Error_Indicators(U, base_mesh, Z, z_mesh, f)
    while max(abs(k) for k in eta) > tolerance:
        orderedeta = np.sort(eta)  # Sort error indicators LENGTH 40
        biggest = np.flipud(orderedeta[int((1-refinement)*len(eta)):len(eta)])
        position = np.empty(len(biggest))
        ratio = float(len(New_mesh))/float(len(z_mesh))
        for i in range(len(biggest)):
            position[i] = eta.tolist().index(biggest[i])*ratio  # GIVES WHAT NUMBER NODE TO REFINE
        refine = np.zeros(len(position))
        for i in range(len(position)):
            refine[i] = math.floor(position[i]) + 0.5  # AT WHAT NODE TO PUT NEW ELEMENT, 5.5 ETC
        refine = np.flipud(sorted(set(refine)))
        for i in range(len(refine)):
            New_mesh = np.insert(New_mesh, refine[i]+0.5, (New_mesh[refine[i]+0.5]+New_mesh[refine[i]-0.5])/2)
        U = Solver(New_mesh)
        eta = Error_Indicators(U, New_mesh, Z, z_mesh, f)
        print(eta)
An example input for this would be:
Mesh_Refinement(np.linspace(0,1,3),0.1,0.2,np.linspace(0,1,60),20)
I understand there is a lot of code here, but I am at a loss; I have no idea where to turn!
Please consider this piece of code from def Error_Indicators:

eta = np.empty(len(Z_mesh))
for i in range(len(Z_mesh)):
    for j in range(len(Z_mesh)):
        eta[i] = (f[i] - Bz[i,j]*U2[j])

Here you overwrite eta[i] in each j iteration, so the inner loop is useless and you could go directly to the last possible j. Did you mean to sum the (f[i] - Bz[i,j]*U2[j]) terms?
eta = np.empty(len(Z_mesh))
for i in range(len(Z_mesh)):
    for j in range(len(Z_mesh)):
        eta[i] += (f[i] - Bz[i,j]*U2[j])
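As a side note (my reading of the intent, not a confirmed fix): eta is created with np.empty, so even the += version accumulates onto uninitialized memory; starting from np.zeros avoids that. And if the quantity wanted is the residual f - B·u weighted componentwise by the dual solution z, the double loop collapses into a single sparse matrix-vector product:

# assumes Bz (sparse stiffness matrix on Z_mesh), U2 (interpolated solution),
# f (load vector) and Z (dual solution) as defined in Error_Indicators
eta = (f - Bz.dot(U2)) * Z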