Numpy lines intersect circles - python

I have n lines and m circles.
I have a [n,2] numpy array of line start points:
[x1_1,y1_1],
[x1_2,y1_2],
...
[x1_n,y1_n]
And a [n,2] numpy array of line end points:
[x2_1,y2_1],
[x2_2,y2_2],
...
[x2_n,y2_n]
And an [m,2] numpy array of circle centers:
[cx_1,cy_1],
...
[cx_m,cy_m]
And an [m,1] numpy array of circle radii:
[cr_1...cr_m]
I would like to efficiently get an [n,m] numpy array where array[i,j] is True if line i intersects circle j.
In general I would take the normalised perpendicular vector to each line and take the dot product of that with each (x1_i,y1_i) - (cx_j,cy_j), and ask whether it's less than cr_j; but I also have to check whether the implied closest point actually lies on the segment, and if not, check each endpoint individually. I'm wondering if there is a more elegant solution.
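For reference, here is a vectorized sketch of exactly that test (this is illustrative, not code from the post): project each center onto each segment, clamp the projection parameter to [0, 1], and compare the distance to the resulting closest point against the radius. Note it treats each circle as a filled disc, so a segment lying entirely inside a circle also counts as intersecting; it also assumes no degenerate (zero-length) segments.
import numpy as np

def segments_intersect_discs(p1, p2, centers, radii):
    # p1, p2: (n, 2) segment endpoints; centers: (m, 2); radii: (m,) or (m, 1)
    d = p2 - p1                                                # (n, 2) segment directions
    w = centers[None, :, :] - p1[:, None, :]                   # (n, m, 2) start -> center
    # projection parameter of each center onto each line, clamped to the segment
    t = np.einsum('nmk,nk->nm', w, d) / np.einsum('nk,nk->n', d, d)[:, None]
    t = np.clip(t, 0.0, 1.0)
    closest = p1[:, None, :] + t[:, :, None] * d[:, None, :]   # (n, m, 2)
    dist = np.linalg.norm(closest - centers[None, :, :], axis=-1)
    return dist <= np.ravel(radii)[None, :]                    # (n, m) boolean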

OK, assume we have these shapes:
start = (np.random.random((3,2))-.5)*5
end = (np.random.random((3,2))-.5)*5
center = (np.random.random((4,2))-.5)*5
radius = np.random.random((4,1))*3
For each center we can compute its distance from the three lines (note: the distance to the infinite line through the endpoints, not the clipped segment) with:
D = np.array([
    np.linalg.norm(np.cross(end-start, start-c).reshape(-1,1), axis=1)
    / np.linalg.norm((end-start).reshape(-1,2), axis=1)
    for c in center
]).T
D[i,j] will be the distance between line i (in rows) and center j (in columns).
Now we can simply compare these distances to the radii with:
I = (D < radius.repeat(len(start), axis=1).T)
I is a matrix of the same shape as D; I[i,j] is True if the distance between line i and center j is smaller than radius j (and so if line i intersects circle j), and False otherwise.
I know it is not very elegant, but I hope it can be useful.
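As an aside, since D has shape (n, m) and radius has shape (m, 1), the repeat can be replaced by plain numpy broadcasting (a small simplification of the comparison above, not a change in behaviour):
I = D < radius.T  # (n, m) compared against (1, m), broadcast over rows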

I can't think of a substantially simpler algorithm than the one outlined in the question. As far as the implementation is concerned, it is easy to make these computations using shapely.
First, let's generate and plot some sample data:
from itertools import product
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
import matplotlib.colors as mcolors
from shapely.geometry import LineString, Point, MultiLineString
import numpy as np
import pandas as pd
# generate data
rng = np.random.default_rng(123)
start = (rng.random((3, 2)) - .5) * 5
end = (rng.random((3, 2)) - .5) * 5
center = (rng.random((4, 2)) - .5) * 5
radius = rng.random((4, 1)) * 3
# plot lines and circles
fig, ax = plt.subplots()
fig.set_size_inches(8, 8)
ax.set_aspect('equal')
colors = list(mcolors.TABLEAU_COLORS.keys())
for i, ends in enumerate(zip(start, end)):
    ax.plot(*zip(*ends), label=f"line {i}")
for i, (c, r) in enumerate(zip(center, radius)):
    ax.add_patch(Circle(c, r, fill=False, ec=colors[i], label=f"circle {i}"))
plt.legend()
plt.show()
This gives:
Next, compute the array of intersections, with rows corresponding to lines and columns corresponding to circles:
lines = [LineString(ends) for ends in list(zip(start, end))]
circles = [Point(c).buffer(r).boundary for c, r in zip(center, radius)]
out = np.empty((len(lines), len(circles)), dtype=bool)
for i, (l, c) in enumerate(product(lines, circles)):
    out[np.unravel_index(i, out.shape)] = l.intersects(c)
#convert to a dataframe for better display
df = pd.DataFrame(out)
df.index.name = 'lines'
df.columns.name = 'circles'
print(df)
The result:
circles      0      1      2      3
lines
0         True  False   True   True
1        False  False  False   True
2        False  False  False  False
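If you are on Shapely 2.0 or newer, the Python-level loop over pairs can also be replaced by the vectorized shapely.intersects function with broadcasting (a sketch, assuming the lines and circles lists built above; requires shapely >= 2.0):
import shapely

lines_arr = np.array(lines, dtype=object)      # (n,) geometries
circles_arr = np.array(circles, dtype=object)  # (m,) geometries
out = shapely.intersects(lines_arr[:, None], circles_arr[None, :])  # (n, m) bool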

Related

filter simplices out of scipy.spatial.Delaunay

Short version:
Is it possible to create a new scipy.spatial.Delaunay object with a subset of the triangles (2D data) from an existing object?
The goal would be to use the find_simplex method on the new object with filtered out simplices.
Similar but not quite the same
matplotlib contour/contourf of **concave** non-gridded data
How to deal with the (undesired) triangles that form between the edges of my geometry when using Triangulation in matplotlib
Long version:
I am looking at lat-lon data that I regrid with scipy.interpolate.griddata like in the pseudo-code below:
import numpy as np
from scipy.interpolate import griddata
from scipy.spatial import Delaunay
from scipy.interpolate.interpnd import _ndim_coords_from_arrays
#lat shape (a,b): 2D array of latitude values
#lon shape (a,b): 2D array of longitude values
#x shape (a,b): 2D array of variable of interest at lat and lon
# lat-lon data
nonan = ~np.isnan(lat)
flat_lat = lat[nonan]
flat_lon = lon[nonan]
flat_x = x[nonan]
# regular lat-lon grid for regridding
lon_ar = np.arange(loni,lonf,resolution)
lat_ar = np.arange(lati,latf,resolution)
lon_grid, lat_grid = np.meshgrid(lon_ar,lat_ar)
# regrid
x_grid = griddata((flat_lon,flat_lat),flat_x,(lon_grid,lat_grid), method='nearest')
# filter out extrapolated values
cloud_points = _ndim_coords_from_arrays((flat_lon,flat_lat))
regrid_points = _ndim_coords_from_arrays((lon_grid.ravel(),lat_grid.ravel()))
tri = Delaunay(cloud_points)
outside_hull = tri.find_simplex(regrid_points) < 0
x_grid[outside_hull.reshape(x_grid.shape)] = np.nan
# filter out large triangles ??
# it would be easy if I could "subset" tri into a new scipy.spatial.Delaunay object
# new_tri = ??
# outside_hull = new_tri.find_simplex(regrid_points) < 0
The problem is that the triangulation contains low-quality (very large, shown in blue in the example below) triangles along the convex hull that don't represent the data well, and I would like to filter them out. I know how to filter them out in the input points, but not in the regridded output. Here is the filter function:
from typing import Optional  # needed for the type hint below

def filter_large_triangles(
    points: np.ndarray, tri: Optional[Delaunay] = None, coeff: float = 2.0
):
    """
    Filter out triangles that have an edge > coeff * median(edge)
    Inputs:
        tri: scipy.spatial.Delaunay object
        coeff: triangles with an edge > coeff * median(edge) will be filtered out
    Outputs:
        valid_slice: boolean array that selects "normal" triangles
    """
    if tri is None:
        tri = Delaunay(points)
    edge_lengths = np.zeros(tri.vertices.shape)
    seen = {}
    # loop over triangles
    for i, vertex in enumerate(tri.vertices):
        # loop over edges
        for j in range(3):
            id0 = vertex[j]
            id1 = vertex[(j + 1) % 3]
            # avoid calculating twice for non-border edges
            if (id0, id1) in seen:
                edge_lengths[i, j] = seen[(id0, id1)]
            else:
                edge_lengths[i, j] = np.linalg.norm(points[id1] - points[id0])
                seen[(id0, id1)] = edge_lengths[i, j]
    median_edge = np.median(edge_lengths.flatten())
    valid_slice = np.all(edge_lengths < coeff * median_edge, axis=1)
    return valid_slice
The bad triangles are shown in blue below:
import matplotlib.pyplot as plt
no_large_triangles = filter_large_triangles(cloud_points, tri)
fig, ax = plt.subplots()
ax.triplot(cloud_points[:,0], cloud_points[:,1], tri.simplices, c='blue')
ax.triplot(cloud_points[:,0], cloud_points[:,1], tri.simplices[no_large_triangles], c='green')
plt.show()
Is it possible to create a new scipy.spatial.Delaunay object with only the no_large_triangles simplices? The goal would be to use the find_simplex method on that new object to easily filter out points.
As an alternative how could I find the indices of points in regrid_points that fall inside the blue triangles? (tri.simplices[~no_large_triangles])
So it is possible to modify the Delaunay object for the purpose of using find_simplex on a subset of simplices, but it seems only with the bruteforce algorithm.
# filter out extrapolated values
cloud_points = _ndim_coords_from_arrays((flat_lon,flat_lat))
regrid_points = _ndim_coords_from_arrays((lon_grid.ravel(),lat_grid.ravel()))
tri = Delaunay(cloud_points)
outside_hull = tri.find_simplex(regrid_points) < 0
# filter out large triangles
large_triangles = ~filter_large_triangles(cloud_points,tri)
large_triangle_ids = np.where(large_triangles)[0]
subset_tri = tri # this doesn't preserve tri, effectively just a renaming
# the _find_simplex_bruteforce method only needs the simplices and neighbors
subset_tri.nsimplex = large_triangle_ids.size
subset_tri.simplices = tri.simplices[large_triangles]
subset_tri.neighbors = tri.neighbors[large_triangles]
# update neighbors
for i, triangle in enumerate(subset_tri.neighbors):
    for j, neighbor_id in enumerate(triangle):
        if neighbor_id in large_triangle_ids:
            # reindex the neighbors to match the size of the subset
            subset_tri.neighbors[i, j] = np.where(large_triangle_ids == neighbor_id)[0][0]
        elif neighbor_id >= 0 and (neighbor_id not in large_triangle_ids):
            # that neighbor was a "normal" triangle that should not exist in the subset
            subset_tri.neighbors[i, j] = -1
inside_large_triangles = subset_tri.find_simplex(regrid_points,bruteforce=True) >= 0
invalid_slice = np.logical_or(outside_hull,inside_large_triangles)
x_grid[invalid_slice.reshape(x_grid.shape)] = np.nan
This shows that the new Delaunay object contains only the subset of large triangles:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.triplot(cloud_points[:,0],cloud_points[:,1],subset_tri.simplices,color='red')
plt.show()
Plotting x_grid with pcolormesh before the filtering for large triangles (zoomed in the blue circle above):
After the filtering:
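For completeness, if mutating the Delaunay object feels too hacky, the same point filtering can be done without touching tri at all: map each regridded point to its containing simplex using the unmodified triangulation, then check membership in the set of large triangles. A sketch (this must run before the mutation above, since subset_tri = tri modifies tri in place):
simplex_ids = tri.find_simplex(regrid_points)   # -1 for points outside the hull
inside_large_triangles = np.isin(simplex_ids, large_triangle_ids)
invalid_slice = np.logical_or(simplex_ids < 0, inside_large_triangles)
x_grid[invalid_slice.reshape(x_grid.shape)] = np.nan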

finding intersection of discs in 3D

I have a data set of discs in 3D space. Each disc is defined by its center, radius, strike, and dip (strike and dip are how geoscientists define the orientation of a plane in 3D). I convert strike and dip to a normal vector, so each disc can be represented by a center, a radius, and a normal vector.
I want to find out how many intersections each disc has with the other discs. The way I'm approaching this is as follows, for each pair of discs (a vectorized sketch of step 2 follows this list):
1. Let r1 and r2 be the radii of the first and second discs, and c1, c2 their center points.
2. Check whether |c1-c2| < r1+r2.
3. If (2) holds true, check whether the normal vectors are parallel.
4. If parallel: see if the discs are in the same plane. If yes, they intersect; if no, they don't.
5. If not parallel: find the intersection line of the two planes, then find the minimum distance of c1 from the line (d1) and the minimum distance of c2 from the line (d2). If d1 < r1 and d2 < r2, the discs intersect; if not, they do not.
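As a side note, step 2 can be done for all pairs at once before any looping (a hedged sketch with illustrative array names; in the question the values live in DataFrame columns):
import numpy as np
from scipy.spatial.distance import cdist

# centers: (k, 3) array of disc centers; radii: (k,) array of radii
dists = cdist(centers, centers)                         # pairwise |c1 - c2|
rsum = radii[:, None] + radii[None, :]                  # pairwise r1 + r2
candidates = np.argwhere(np.triu(dists <= rsum, k=1))   # only these pairs need the full test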
Below I have attached all the functions that I'm using. In the code, 'Easting' stands for X, 'Northing' stands for Y, and 'Depth' stands for Z. I also find the minimum value in each of easting, northing, and depth and subtract these values from their respective columns; I assume this will not affect the end result, as it is only a translation.
import numpy as np
import pandas as pd
from numpy import radians, cos, sin
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D, art3d, proj3d
from matplotlib.patches import Circle
from itertools import product
################################# functions #################################
def plane_intersect(a, b):
    """
    a, b: 4-tuples/lists of plane coefficients A, B, C, D (in order), where
          Ax + By + Cz + D = 0
    output: 2 points on the line of intersection, np.arrays, shape (3,)
    """
    a_vec, b_vec = np.array(a[:3]), np.array(b[:3])
    aXb_vec = np.cross(a_vec, b_vec)
    A = np.array([a_vec, b_vec, aXb_vec])
    d = np.array([-a[3], -b[3], 0.]).reshape(3, 1)
    # could add a np.linalg.det(A) == 0 test to prevent linalg.solve throwing an error
    p_inter = np.linalg.solve(A, d).T
    return p_inter[0], (p_inter + aXb_vec)[0]
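# Quick sanity check for plane_intersect (an editor's sketch, not part of the
# original code): the planes z = 0 (coefficients [0, 0, 1, 0]) and y = 0
# ([0, 1, 0, 0]) intersect in the x-axis, so both returned points should
# have y == z == 0:
#   p, q = plane_intersect([0, 0, 1, 0], [0, 1, 0, 0])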
def dist_point_from_line(p, q, rs):
    """
    p, q, rs: 3-D points in space, where p, q are two points on a line
              and rs is the point of interest
    output: minimum (perpendicular) distance of point rs from the line defined by p, q
    """
    p = np.array(p)
    q = np.array(q)
    rs = np.array(rs)
    x = p - q
    return np.linalg.norm(
        np.outer(np.dot(rs - q, x) / np.dot(x, x), x) + q - rs,
        axis=1)
def plane_parameters(S, D, x, y, z):
    """
    S, D: strike and dip of a plane in degrees
    x, y, z: easting, northing, depth of the event (center of the disc)
    output: parameters of the plane containing the disc
    """
    A = -cos(radians(S)) * sin(radians(D))
    B = sin(radians(S)) * sin(radians(D))
    C = -cos(radians(D))
    D = -((A * x) + (B * y) + (C * z))
    return A, B, C, D
def disc_intersection(df):
    for i in range(len(df)):
        a = df.iloc[i][['Northing','Easting','Depth']]
        r1 = df.iloc[i][['SourceRo']][0]
        Strike1 = df.iloc[i][['Strike']][0]
        Dip1 = df.iloc[i][['Dip']][0]
        for j in range(i+1, len(df)):
            print()
            print("i,j:", i, ',', j)
            b = df.iloc[j][['Northing','Easting','Depth']]
            r2 = df.iloc[j][['SourceRo']][0]
            print('r1:', r1)
            print('r2:', r2)
            Strike2 = df.iloc[j][['Strike']][0]
            Dip2 = df.iloc[j][['Dip']][0]
            centers_distance = np.linalg.norm(a-b, ord=2)
            print('centers_distance:', centers_distance)
            if centers_distance <= (r1+r2):
                print("|c1-c2| <= r1+r2")
                A1,B1,C1,D1 = plane_parameters(Strike1, Dip1, a['Easting'], a['Northing'], a['Depth'])
                A2,B2,C2,D2 = plane_parameters(Strike2, Dip2, b['Easting'], b['Northing'], b['Depth'])
                print("A1,B1,C1,D1:", A1, ",", B1, ",", C1, ",", D1)
                print("A2,B2,C2,D2:", A2, ",", B2, ",", C2, ",", D2)
                if not np.any(np.cross(np.array([A1,B1,C1]), np.array([A2,B2,C2]))):
                    print("normals are parallel!")
                    if A2*a['Easting'] + B2*a['Northing'] + C2*a['Depth'] + D2 == 0:
                        print("discs are in the same plane and they do intersect!")
                        df.iloc[i,6] += 1
                        df.iloc[j,6] += 1
                    else:
                        print("discs are not in the same plane and hence they don't intersect!")
                        # planes are parallel and discs are not in the same plane => no intersection
                        pass
                else:
                    print("planes not parallel!")
                    p, q = plane_intersect([A1,B1,C1,D1], [A2,B2,C2,D2])
                    print("c1 from intersection line:", dist_point_from_line(p, q, np.array(a)))
                    print("c2 from intersection line:", dist_point_from_line(p, q, np.array(b)))
                    if dist_point_from_line(p, q, np.array(a)) <= r1 and dist_point_from_line(p, q, np.array(b)) <= r2:
                        print("intersection found!!!")
                        df.iat[i,6] += 1
                        df.iat[j,6] += 1
                    else:
                        print("intersection NOT found!!!")
                        # although the planes intersect each other, the discs do not!
                        pass
            else:
                # There is no way that two discs placed farther apart than r1+r2 can intersect each other.
                print("|c1-c2| <= r1+r2 is not true!")
                pass
    return df
##########################################################
df = pd.DataFrame([[281017, 1941326, 8923, 282.18, 64.27, 32.874017],
                   [281019, 1941351, 8902, 47.51, 60.60, 35.826773],
                   [281107, 1941313, 8818, 285.14, 70.81, 52.854332],
                   [281078, 1941385, 8865, 42.60, 40.11, 35.170605]],
                  columns=['Northing', 'Easting', 'Depth', 'Strike', 'Dip', 'SourceRo'])
df = df.loc[:, ['Northing', 'Easting', 'Depth', 'Strike', 'Dip', 'SourceRo']]
df['num_intersections'] = np.zeros(len(df))
north_min=df['Northing'].min()
east_min=df['Easting'].min()
depth_min=df['Depth'].min()
df['Northing']=df['Northing']-north_min
df['Easting']=df['Easting']-east_min
df['Depth']=df['Depth']-depth_min
north_min=df['Northing'].min()
north_max=df['Northing'].max()
east_min=df['Easting'].min()
east_max=df['Easting'].max()
depth_min=df['Depth'].min()
depth_max=df['Depth'].max()
r_max=df['SourceRo'].max()
north_min-=r_max
north_max+=r_max
east_min-=r_max
east_max+=r_max
depth_min-=r_max
depth_max+=r_max
new_df = disc_intersection(df)
The first two discs actually intersect. I have included an image of the 4 discs in space for your reference.
I have noticed that d1 and d2 change when I add or remove a disc from the DataFrame, which should not happen mathematically (it is only a translation), but I don't know which part of the code is problematic.

border/edge operations on numpy arrays

Suppose I have a 3D numpy array of nonzero values and "background" = 0. As an example I will take a sphere of random values:
array = np.random.randint(1, 5, size = (100,100,100))
z,y,x = np.ogrid[-50:50, -50:50, -50:50]
mask = x**2 + y**2 + z**2<= 20**2
array[np.invert(mask)] = 0
First, I would like to find the "border voxels" (all nonzero values that have a zero within their 3x3x3 neighbourhood). Second, I would like to replace all border voxels with the mean of their nonzero neighbours. So far I have tried to use scipy's generic filter in the following way:
Function to apply at each element:
def borderCheck(values):
    # check if the footprint center is on a nonzero value
    if values[13] != 0:
        # replace border voxels with the mean of nonzero neighbours
        if 0 in values:
            return np.sum(values)/np.count_nonzero(values)
        else:
            return values[13]
    else:
        return 0
Generic filter:
from scipy import ndimage
result = ndimage.generic_filter(array, borderCheck, footprint = np.ones((3,3,3)))
Is this a proper way to handle this problem? I feel that I am trying to reinvent the wheel here and that there must be a shorter, nicer way to achieve the result. Are there any other suitable (numpy, scipy ) functions that I can use?
EDIT
I messed one thing up: I would like to replace all border voxels with the mean of their nonzero AND non-border neighbours. For this, I tried to clean up the neighbours from ali_m's code (2D case):
# for each neighbour voxel, check whether it also appears in the border/edges
non_border_neighbours = []
for each in neighbours:
    non_border_neighbours.append([i for i in each if nonzero_idx[i] not in edge_idx])
Now I can't figure out why non_border_neighbours comes back empty.
Furthermore, correct me if I am wrong, but doesn't tree.query_ball_point with radius 1 address only the 6 nearest neighbours (Euclidean distance 1)? Should I set sqrt(3) (3D case) as the radius to get the 26-neighbourhood?
I think it's best to start out with the 2D case first, since it can be visualized much more easily:
import numpy as np
from matplotlib import pyplot as plt
A = np.random.randint(1, 5, size=(100, 100)).astype(np.double)
y, x = np.ogrid[-50:50, -50:50]
mask = x**2 + y**2 <= 30**2
A[~mask] = 0
To find the edge pixels you could perform binary erosion on your mask, then XOR the result with your mask
from scipy import ndimage

# rank 2 structure with full connectivity
struct = ndimage.generate_binary_structure(2, 2)
erode = ndimage.binary_erosion(mask, struct)
edges = mask ^ erode
One approach to find the nearest non-zero neighbours of each edge pixel would be to use a scipy.spatial.cKDTree:
from scipy.spatial import cKDTree
# the indices of the non-zero locations and their corresponding values
nonzero_idx = np.vstack(np.where(mask)).T
nonzero_vals = A[mask]
# build a k-D tree
tree = cKDTree(nonzero_idx)
# use it to find the indices of all non-zero values that are at most 1 pixel
# away from each edge pixel
edge_idx = np.vstack(np.where(edges)).T
neighbours = tree.query_ball_point(edge_idx, r=1, p=np.inf)
# take the average value for each set of neighbours
new_vals = np.hstack([np.mean(nonzero_vals[n]) for n in neighbours])
# use these to replace the values of the edge pixels
A_new = A.astype(np.double, copy=True)
A_new[edges] = new_vals
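A note on the choice of p=np.inf above (this also addresses the "only 6 neighbours?" concern from the edit): with the Chebyshev metric, a ball of radius 1 contains the full 8-neighbourhood in 2D and the full 26-neighbourhood in 3D, whereas with the default Euclidean metric (p=2) you would indeed need r=sqrt(3) in 3D to reach the diagonal neighbours. A quick check (editor's sketch):
from itertools import product
from scipy.spatial import cKDTree
import numpy as np

pts = np.array(list(product((-1, 0, 1), repeat=3)))           # all 27 offsets around the origin
tree = cKDTree(pts)
print(len(tree.query_ball_point([0, 0, 0], r=1)))             # 7: centre + 6 face neighbours
print(len(tree.query_ball_point([0, 0, 0], r=1, p=np.inf)))   # 27: centre + full 26-neighbourhood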
Some visualisation:
fig, ax = plt.subplots(1, 3, figsize=(10, 4), sharex=True, sharey=True)
norm = plt.Normalize(0, A.max())
ax[0].imshow(A, norm=norm)
ax[0].set_title('Original', fontsize='x-large')
ax[1].imshow(edges)
ax[1].set_title('Edges', fontsize='x-large')
ax[2].imshow(A_new, norm=norm)
ax[2].set_title('Averaged', fontsize='x-large')
for aa in ax:
    aa.set_axis_off()
ax[0].set_xlim(20, 50)
ax[0].set_ylim(50, 80)
fig.tight_layout()
plt.show()
This approach will also generalize to the 3D case:
B = np.random.randint(1, 5, size=(100, 100, 100)).astype(np.double)
z, y, x = np.ogrid[-50:50, -50:50, -50:50]
mask = x**2 + y**2 + z**2 <= 20**2
B[~mask] = 0
struct = ndimage.generate_binary_structure(3, 3)
erode = ndimage.binary_erosion(mask, struct)
edges = mask ^ erode
nonzero_idx = np.vstack(np.where(mask)).T
nonzero_vals = B[mask]
tree = cKDTree(nonzero_idx)
edge_idx = np.vstack(np.where(edges)).T
neighbours = tree.query_ball_point(edge_idx, r=1, p=np.inf)
new_vals = np.hstack([np.mean(nonzero_vals[n]) for n in neighbours])
B_new = B.astype(np.double, copy=True)
B_new[edges] = new_vals
Test against your version:
def borderCheck(values):
    # check if the footprint center is on a nonzero value
    if values[13] != 0:
        # replace border voxels with the mean of nonzero neighbours
        if 0 in values:
            return np.sum(values)/np.count_nonzero(values)
        else:
            return values[13]
    else:
        return 0
result = ndimage.generic_filter(B, borderCheck, footprint=np.ones((3, 3, 3)))
print(np.allclose(B_new, result))
# True
I'm sure this isn't the most efficient way to do it, but it will still be significantly faster than using generic_filter.
Update
The performance could be further improved by reducing the number of points that are considered as candidate neighbours of the edge pixels/voxels:
# ...
# the edge pixels/voxels plus their immediate non-zero neighbours
erode2 = ndimage.binary_erosion(erode, struct)
candidate_neighbours = mask ^ erode2
nonzero_idx = np.vstack(np.where(candidate_neighbours)).T
nonzero_vals = B[candidate_neighbours]
# ...

How can an almost arbitrary plane in a 3D dataset be plotted by matplotlib?

There is an array containing 3D data, of shape e.g. (64,64,64). How do you plot a plane given by a point and a normal (similar to hkl planes in crystallography) through this dataset?
Similar to what can be done in MayaVi by rotating a plane through the data.
The resulting plot will contain non-square planes in most cases.
Can those be done with matplotlib (some sort of non-rectangular patch)?
Edit: I almost solved this myself (see below) but still wonder how non-rectangular patches can be plotted in matplotlib...?
Edit: Due to discussions below I restated the question.
This is funny; I replied to a similar question just today. The way to go is interpolation. You can use griddata from scipy.interpolate:
Griddata
This page features a very nice example, and the signature of the function is really close to your data.
You still have to somehow define the points on your plane for which you want to interpolate the data. I will have a look at this; my linear algebra lessons were a couple of years ago.
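For the record, a sketch of that step (names are illustrative): build two orthonormal vectors spanning the plane and lay out a grid of sample points, which can then be handed to griddata (or scipy.ndimage.map_coordinates):
import numpy as np

n = np.array([-1.0, -1.0, 1.0])           # plane normal
n /= np.linalg.norm(n)
point = np.array([32.0, 32.0, 32.0])      # a point on the plane

u = np.cross(n, [1.0, 0.0, 0.0])          # first in-plane direction
if np.linalg.norm(u) < 1e-8:              # normal was (anti)parallel to the x-axis
    u = np.cross(n, [0.0, 1.0, 0.0])
u /= np.linalg.norm(u)
v = np.cross(n, u)                        # second in-plane direction

s, t = np.meshgrid(np.arange(-32, 32), np.arange(-32, 32))
plane_points = point + s[..., None]*u + t[..., None]*v   # (64, 64, 3) sample points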
I have the penultimate solution for this problem. Partially solved by using the second answer to Plot a plane based on a normal vector and a point in Matlab or matplotlib :
# coding: utf-8
import numpy as np
from matplotlib.pyplot import imshow, show

A = np.empty((64,64,64))  # This is the data array

def f(x, y):
    return np.sin(x/(2*np.pi)) + np.cos(y/(2*np.pi))

xx, yy = np.meshgrid(range(64), range(64))
for x in range(64):
    A[:,:,x] = f(xx, yy) * np.cos(x/np.pi)

N = np.zeros((64,64))
"""This is the plane we cut from A.
It should be larger than 64, due to diagonal planes being larger.
Will be fixed."""

normal = np.array([-1,-1,1])  # Define cut plane here. Normal vector components restricted to integers
point = np.array([0,0,0])
d = -np.sum(point*normal)

def plane(x, y):  # Get the plane's z values
    return (-normal[0]*x - normal[1]*y - d) / normal[2]

def getZZ(x, y):  # Get z for all values x, y. If z >= 64 it's out of range
    for i in x:
        for j in y:
            if plane(i, j) < 64:
                N[i,j] = A[i, j, int(plane(i, j))]  # int() needed: array indices must be integers

getZZ(range(64), range(64))
imshow(N, interpolation="Nearest")
show()
It's not the ultimate solution since the plot is not restricted to points having a z value, planes larger than 64 * 64 are not accounted for and the planes have to be defined at (0,0,0).
For the reduced requirements, I prepared a simple example
import numpy as np
import pylab as plt

data = np.arange(64**3)
data.resize((64,64,64))

def get_slice(volume, orientation, index):
    orientation2slicefunc = {
        "x": lambda ar: ar[index,:,:],
        "y": lambda ar: ar[:,index,:],
        "z": lambda ar: ar[:,:,index],
    }
    return orientation2slicefunc[orientation](volume)

plt.subplot(221)
plt.imshow(get_slice(data, "x", 10), vmin=0, vmax=64**3)
plt.subplot(222)
plt.imshow(get_slice(data, "x", 39), vmin=0, vmax=64**3)
plt.subplot(223)
plt.imshow(get_slice(data, "y", 15), vmin=0, vmax=64**3)
plt.subplot(224)
plt.imshow(get_slice(data, "z", 25), vmin=0, vmax=64**3)
plt.show()
This leads to the following plot:
The main trick is the dictionary mapping orientations to lambda functions, which saves us from writing annoying if-then-else blocks. Of course you can decide to use different names, e.g. numbers, for the orientations.
Maybe this helps you.
Thorsten
P.S.: I didn't care about "IndexOutOfRange", for me it's o.k. to let this exception pop out since it is perfectly understandable in this context.
I had to do something similar for MRI data enhancement.
The code can probably be optimized, but it works as it is.
My data is a 3-dimensional numpy array representing an MRI scan. It has size [128,128,128], but the code can be modified to accept any dimensions. When the plane is outside the cube boundary, you have to give a default value to the variable fill in the main function; in my case I chose: data_cube[0:5,0:5,0:5].mean()
def create_normal_vector(x, y, z):
    normal = np.asarray([x,y,z])
    normal = normal/np.sqrt(sum(normal**2))
    return normal

def get_plane_equation_parameters(normal, point):
    a,b,c = normal
    d = np.dot(normal, point)
    return a,b,c,d  # ax+by+cz=d

def get_point_plane_proximity(plane, point):
    # just an approximation
    return np.dot(plane[0:-1], point) - plane[-1]

def get_corner_interesections(plane, cube_dim=128):  # to reduce the search space
    # dimension is 128,128,128
    corners_list = []
    only_x = np.zeros(4)
    min_prox_x = 9999
    min_prox_y = 9999
    min_prox_z = 9999
    min_prox_yz = 9999
    for i in range(cube_dim):
        temp_min_prox_x = abs(get_point_plane_proximity(plane, np.asarray([i,0,0])))
        # print("pseudo distance x: {0}, point: [{1},0,0]".format(temp_min_prox_x,i))
        if temp_min_prox_x < min_prox_x:
            min_prox_x = temp_min_prox_x
            corner_intersection_x = np.asarray([i,0,0])
            only_x[0] = i
        temp_min_prox_y = abs(get_point_plane_proximity(plane, np.asarray([i,cube_dim,0])))
        # print("pseudo distance y: {0}, point: [{1},{2},0]".format(temp_min_prox_y,i,cube_dim))
        if temp_min_prox_y < min_prox_y:
            min_prox_y = temp_min_prox_y
            corner_intersection_y = np.asarray([i,cube_dim,0])
            only_x[1] = i
        temp_min_prox_z = abs(get_point_plane_proximity(plane, np.asarray([i,0,cube_dim])))
        # print("pseudo distance z: {0}, point: [{1},0,{2}]".format(temp_min_prox_z,i,cube_dim))
        if temp_min_prox_z < min_prox_z:
            min_prox_z = temp_min_prox_z
            corner_intersection_z = np.asarray([i,0,cube_dim])
            only_x[2] = i
        temp_min_prox_yz = abs(get_point_plane_proximity(plane, np.asarray([i,cube_dim,cube_dim])))
        # print("pseudo distance yz: {0}, point: [{1},{2},{2}]".format(temp_min_prox_yz,i,cube_dim))
        if temp_min_prox_yz < min_prox_yz:
            min_prox_yz = temp_min_prox_yz
            corner_intersection_yz = np.asarray([i,cube_dim,cube_dim])
            only_x[3] = i
    corners_list.append(corner_intersection_x)
    corners_list.append(corner_intersection_y)
    corners_list.append(corner_intersection_z)
    corners_list.append(corner_intersection_yz)
    corners_list.append(only_x.min())
    corners_list.append(only_x.max())
    return corners_list

def get_points_intersection(plane, min_x, max_x, data_cube, shape=128):
    fill = data_cube[0:5,0:5,0:5].mean()  # this can be a parameter
    extended_data_cube = np.ones([shape+2, shape, shape]) * fill
    extended_data_cube[1:shape+1,:,:] = data_cube
    diag_image = np.zeros([shape, shape])
    min_x_value = 999999
    for i in range(shape):
        for j in range(shape):
            for k in range(int(min_x), int(max_x)+1):
                current_value = abs(get_point_plane_proximity(plane, np.asarray([k,i,j])))
                # print("current_value:{0}, val: [{1},{2},{3}]".format(current_value,k,i,j))
                if current_value < min_x_value:
                    diag_image[i,j] = extended_data_cube[k,i,j]
                    min_x_value = current_value
            min_x_value = 999999
    return diag_image
The way it works is the following:
you create a normal vector:
for example [5,0,3]
normal1=create_normal_vector(5, 0,3) #this is only to normalize
then you create a point:
(my cube data shape is [128,128,128])
point = [64,64,64]
You calculate the plane equation parameters, [a,b,c,d] where ax+by+cz=d
plane1=get_plane_equation_parameters(normal1,point)
then to reduce the search space you can calculate the intersection of the plane with the cube:
corners1 = get_corner_interesections(plane1,128)
where corners1 = [intersection [x,0,0],intersection [x,128,0],intersection [x,0,128],intersection [x,128,128], min intersection [x,y,z], max intersection [x,y,z]]
With all these you can calculate the intersection between the cube and the plane:
image1 = get_points_intersection(plane1,corners1[-2],corners1[-1],data_cube)
Some examples:
normal is [1,0,0] point is [64,64,64]
normal is [5,1,0],[5,1,1],[5,0,1] point is [64,64,64]:
normal is [5,3,0],[5,3,3],[5,0,3] point is [64,64,64]:
normal is [5,-5,0],[5,-5,-5],[5,0,-5] point is [64,64,64]:
Thank you.
The other answers here do not appear to be very efficient: they use explicit loops over pixels or scipy.interpolate.griddata, which is designed for unstructured input data. Here is an efficient (vectorized) and generic solution.
There is a pure numpy implementation (for nearest-neighbor "interpolation") and one for linear interpolation, which delegates the interpolation to scipy.ndimage.map_coordinates. (The latter function probably didn't exist in 2013, when this question was asked.)
import numpy as np
from scipy.ndimage import map_coordinates

def slice_datacube(cube, center, eXY, mXY, fill=np.nan, interp=True):
    """Get a 2D slice from a 3-D array.

    Copyright: Han-Kwang Nienhuys, 2020.
    License: any of CC-BY-SA, CC-BY, BSD, GPL, LGPL
    Reference: https://stackoverflow.com/a/62733930/6228891

    Parameters:
    - cube: 3D array, assumed shape (nx, ny, nz).
    - center: shape (3,) with coordinates of center; can be float.
    - eXY: unit vectors, shape (2, 3) - for X and Y axes of the slice.
      (unit vectors must be orthogonal; normalization is optional).
    - mXY: size tuple of output array (mX, mY) - int.
    - fill: value to use for out-of-range points.
    - interp: whether to interpolate (rather than using 'nearest').

    Return:
    - slice: array, shape (mX, mY).
    """
    center = np.array(center, dtype=float)
    assert center.shape == (3,)
    eXY = np.array(eXY)/np.linalg.norm(eXY, axis=1)[:, np.newaxis]
    if not np.isclose(eXY[0] @ eXY[1], 0, atol=1e-6):
        raise ValueError('eX and eY not orthogonal.')

    # R: rotation matrix: data_coords = center + R @ slice_coords
    eZ = np.cross(eXY[0], eXY[1])
    R = np.array([eXY[0], eXY[1], eZ], dtype=np.float32).T

    # setup slice points P with coordinates (X, Y, 0)
    mX, mY = int(mXY[0]), int(mXY[1])
    Xs = np.arange(0.5 - mX/2, 0.5 + mX/2)
    Ys = np.arange(0.5 - mY/2, 0.5 + mY/2)
    PP = np.zeros((3, mX, mY), dtype=np.float32)
    PP[0, :, :] = Xs.reshape(mX, 1)
    PP[1, :, :] = Ys.reshape(1, mY)

    # Transform to data coordinates (x, y, z) - idx.shape == (3, mX, mY)
    if interp:
        idx = np.einsum('il,ljk->ijk', R, PP) + center.reshape(3, 1, 1)
        slice = map_coordinates(cube, idx, order=1, mode='constant', cval=fill)
    else:
        idx = np.einsum('il,ljk->ijk', R, PP) + (0.5 + center.reshape(3, 1, 1))
        idx = idx.astype(np.int16)
        # Find out which coordinates are out of range - shape (mX, mY)
        badpoints = np.any([
            idx[0, :, :] < 0,
            idx[0, :, :] >= cube.shape[0],
            idx[1, :, :] < 0,
            idx[1, :, :] >= cube.shape[1],
            idx[2, :, :] < 0,
            idx[2, :, :] >= cube.shape[2],
            ], axis=0)
        idx[:, badpoints] = 0
        slice = cube[idx[0], idx[1], idx[2]]
        slice[badpoints] = fill
    return slice
# Demonstration
nx, ny, nz = 50, 70, 100
cube = np.full((nx, ny, nz), np.float32(1))
cube[nx//4:nx*3//4, :, :] += 1
cube[:, ny//2:ny*3//4, :] += 3
cube[:, :, nz//4:nz//2] += 7
cube[nx//3-2:nx//3+2, ny//2-2:ny//2+2, :] = 0 # black dot
Rz, Rx = np.pi/6, np.pi/4 # rotation angles around z and x
cz, sz = np.cos(Rz), np.sin(Rz)
cx, sx = np.cos(Rx), np.sin(Rx)
Rmz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
Rmx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
eXY = (Rmx @ Rmz).T[:2]
slice = slice_datacube(
    cube,
    center=[nx/3, ny/2, nz*0.7],
    eXY=eXY,
    mXY=[80, 90],
    fill=np.nan,
    interp=False
)
import matplotlib.pyplot as plt
plt.close('all')
plt.imshow(slice.T) # imshow expects shape (mY, mX)
plt.colorbar()
Output (for interp=False):
For this test case (50x70x100 datacube, 80x90 slice size) the run time is 376 µs (interp=False) and 550 µs (interp=True) on my laptop.

Python: Calculate Voronoi Tesselation from Scipy's Delaunay Triangulation in 3D

I have about 50,000 data points in 3D on which I have run scipy.spatial.Delaunay from the new scipy (I'm using 0.10) which gives me a very useful triangulation.
Based on: http://en.wikipedia.org/wiki/Delaunay_triangulation (section "Relationship with the Voronoi diagram")
...I was wondering if there is an easy way to get to the "dual graph" of this triangulation, which is the Voronoi Tesselation.
Any clues? My searching around on this seems to show no pre-built scipy functions, which I find almost strange!
Thanks,
Edward
The adjacency information can be found in the neighbors attribute of the Delaunay object. Unfortunately, the code does not expose the circumcenters to the user at the moment, so you'll have to recompute those yourself.
Also, the Voronoi edges that extend to infinity are not directly obtained in this way. It's still probably possible, but needs some more thinking.
import numpy as np
from scipy.spatial import Delaunay
points = np.random.rand(30, 2)
tri = Delaunay(points)
p = tri.points[tri.vertices]
# Triangle vertices
A = p[:,0,:].T
B = p[:,1,:].T
C = p[:,2,:].T
# See http://en.wikipedia.org/wiki/Circumscribed_circle#Circumscribed_circles_of_triangles
# The following is just a direct transcription of the formula there
a = A - C
b = B - C
def dot2(u, v):
    return u[0]*v[0] + u[1]*v[1]

def cross2(u, v, w):
    """u x (v x w)"""
    return dot2(u, w)*v - dot2(u, v)*w

def ncross2(u, v):
    """|| u x v ||^2"""
    return sq2(u)*sq2(v) - dot2(u, v)**2

def sq2(u):
    return dot2(u, u)
cc = cross2(sq2(a) * b - sq2(b) * a, a, b) / (2*ncross2(a, b)) + C
# Grab the Voronoi edges
vc = cc[:,tri.neighbors]
vc[:,tri.neighbors == -1] = np.nan # edges at infinity, plotting those would need more work...
lines = []
lines.extend(zip(cc.T, vc[:,:,0].T))
lines.extend(zip(cc.T, vc[:,:,1].T))
lines.extend(zip(cc.T, vc[:,:,2].T))
# Plot it
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
lines = LineCollection(lines, edgecolor='k')
# plt.hold(1)  # not needed (and removed) in modern matplotlib
plt.plot(points[:,0], points[:,1], '.')
plt.plot(cc[0], cc[1], '*')
plt.gca().add_collection(lines)
plt.axis('equal')
plt.xlim(-0.1, 1.1)
plt.ylim(-0.1, 1.1)
plt.show()
As I spent a considerable amount of time on this, I'd like to share my solution on how to get the Voronoi polygons instead of just the edges.
The code is at https://gist.github.com/letmaik/8803860 and extends on the solution of tauran.
First, I changed the code to give me vertices and (pairs of) indices (=edges) separately, as many calculations can be simplified when working on indices instead of point coordinates.
Then, in the voronoi_cell_lines method I determine which edges belong to which cells. For that I use the proposed solution of Alink from a related question. That is, for each edge find the two nearest input points (=cells) and create a mapping from that.
The last step is to create the actual polygons (see voronoi_polygons method). First, the outer cells which have dangling edges need to be closed. This is as simple as looking through all edges and checking which ones have only one neighboring edge. There can be either zero or two such edges. In case of two, I then connect these by introducing an additional edge.
Finally, the unordered edges in each cell need to be put into the right order to derive a polygon from them.
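In case it helps to see that last step spelled out, here is a minimal sketch of ordering the unordered edges of one closed cell into a vertex sequence (an editor's sketch, not code from the gist; it assumes the edges form a single closed loop):
from collections import defaultdict

def order_edges(edges):
    # edges: list of (i, j) vertex-index pairs forming one closed loop
    adj = defaultdict(list)
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    start = edges[0][0]
    poly = [start]
    prev, cur = None, start
    while True:
        # step to the neighbour we did not just come from
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == start:
            break
        poly.append(nxt)
        prev, cur = cur, nxt
    return poly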
The usage is:
P = np.random.random((100,2))

fig = plt.figure(figsize=(4.5,4.5))
axes = plt.subplot(1,1,1)
plt.axis([-0.05,1.05,-0.05,1.05])

vertices, lineIndices = voronoi(P)
cells = voronoi_cell_lines(P, vertices, lineIndices)
polys = voronoi_polygons(cells)

for pIdx, polyIndices in polys.items():
    poly = vertices[np.asarray(polyIndices)]
    p = matplotlib.patches.Polygon(poly, facecolor=np.random.rand(3))  # facecolor must be a flat RGB triple
    axes.add_patch(p)

X, Y = P[:,0], P[:,1]
plt.scatter(X, Y, marker='.', zorder=2)
plt.axis([-0.05,1.05,-0.05,1.05])
plt.show()
which outputs:
The code is probably not suitable for large numbers of input points and can be improved in some areas. Nevertheless, it may be helpful to others who have similar problems.
I came across the same problem and built a solution out of pv.'s answer and other code snippets I found across the web. The solution returns a complete Voronoi diagram, including the outer lines where no triangle neighbours are present.
#!/usr/bin/env python
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay

def voronoi(P):
    delauny = Delaunay(P)
    triangles = delauny.points[delauny.vertices]

    lines = []

    # Triangle vertices
    A = triangles[:, 0]
    B = triangles[:, 1]
    C = triangles[:, 2]
    lines.extend(zip(A, B))
    lines.extend(zip(B, C))
    lines.extend(zip(C, A))
    lines = matplotlib.collections.LineCollection(lines, color='r')
    plt.gca().add_collection(lines)

    circum_centers = np.array([triangle_csc(tri) for tri in triangles])

    segments = []
    for i, triangle in enumerate(triangles):
        circum_center = circum_centers[i]
        for j, neighbor in enumerate(delauny.neighbors[i]):
            if neighbor != -1:
                segments.append((circum_center, circum_centers[neighbor]))
            else:
                ps = triangle[(j+1)%3] - triangle[(j-1)%3]
                ps = np.array((ps[1], -ps[0]))

                middle = (triangle[(j+1)%3] + triangle[(j-1)%3]) * 0.5
                di = middle - triangle[j]

                ps /= np.linalg.norm(ps)
                di /= np.linalg.norm(di)

                if np.dot(di, ps) < 0.0:
                    ps *= -1000.0
                else:
                    ps *= 1000.0
                segments.append((circum_center, circum_center + ps))
    return segments

def triangle_csc(pts):
    rows, cols = pts.shape

    A = np.bmat([[2 * np.dot(pts, pts.T), np.ones((rows, 1))],
                 [np.ones((1, rows)), np.zeros((1, 1))]])

    b = np.hstack((np.sum(pts * pts, axis=1), np.ones((1))))
    x = np.linalg.solve(A, b)
    bary_coords = x[:-1]
    return np.sum(pts * np.tile(bary_coords.reshape((pts.shape[0], 1)), (1, pts.shape[1])), axis=0)

if __name__ == '__main__':
    P = np.random.random((300,2))
    X, Y = P[:,0], P[:,1]

    fig = plt.figure(figsize=(4.5,4.5))
    axes = plt.subplot(1,1,1)
    plt.scatter(X, Y, marker='.')
    plt.axis([-0.05,1.05,-0.05,1.05])

    segments = voronoi(P)
    lines = matplotlib.collections.LineCollection(segments, color='k')
    axes.add_collection(lines)
    plt.axis([-0.05,1.05,-0.05,1.05])
    plt.show()
Black lines = Voronoi diagram, red lines = Delaunay triangles
I do not know of a function to do this, but it does not seem like an overly complicated task.
The Voronoi vertices are the circumcenters of the Delaunay triangles, as described in the Wikipedia article.
So you could start with a function that finds the center of the circumcircle of a triangle, which is basic mathematics (http://en.wikipedia.org/wiki/Circumscribed_circle).
Then, just join the centers of adjacent triangles.
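For anyone finding this later: since SciPy 0.12 this is built in as scipy.spatial.Voronoi (with a plotting helper), so none of the manual circumcenter bookkeeping above is needed anymore:
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d

points = np.random.rand(30, 2)
vor = Voronoi(points)   # vor.vertices, vor.ridge_vertices, vor.regions hold the diagram
voronoi_plot_2d(vor)
plt.show()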
