Python scipy getting the results of Delaunay triangulation - python

I am using scipy.spatial.Delaunay to triangulate a cloud of 3D points.
Each point is imported in an array with the structure R, G, B, X, Y, Z, only the values X, Y, Z are used to triangulate.
I calculate the barycentric coordinates of a list of "target" points.
For each target, I'd like to get:
the R, G, B values of each vertex of the simplex in which the target is located
the barycentric coordinates of the target
I cannot figure out how to get these data with my code.
print bcoords works fine (it generates a series of 4 values for each target):
[[ 2.89657287e-01 3.98169955e-01 1.24220635e-01 1.87952122e-01]
[ 3.24695465e-01 3.99228351e-01 8.91849061e-02 1.86891278e-01]
[ 2.89657287e-01 3.98169955e-01 1.24220635e-01 1.87952122e-01]
...,
[ -1.13763739e+03 1.32600196e+03 2.61787735e+02 -4.49152304e+02]
[ -1.13764457e+03 1.32600118e+03 2.61796224e+02 -4.49152837e+02]
[ -1.13765132e+03 1.32600045e+03 2.61804205e+02 -4.49153338e+02]]
However, print tetrahedra generates a flat list of numbers:
[915 915 915 ..., -1 -1 -1]
This list contains simplex indices (-1 marks targets outside the convex hull).
How can I replace it with a list of the vertices of each simplex?
code:
import csv
import numpy as np
import scipy.spatial
points = np.array([(int(R), int(G), int(B), float(X), float(Y), float(Z))
                   for R, G, B, X, Y, Z in csv.reader(open('XYZcolorlist_D65.csv'))])
# load the R,G,B,X,Y,Z rows of 'points' into a np.array
# alternative: points = pd.read_csv('XYZcolorlist_D65.csv')
tri = scipy.spatial.Delaunay(points[:, [3, 4, 5]])
# do the triangulation on the X,Y,Z columns
indices = tri.simplices
# indices of vertices
vertices = points[indices]
# the vertices for each tetrahedron
targets = np.array([(float(X), float(Y), float(Z))
                    for name, X, Y, Z, crap in csv.reader(open('targets.csv'))])
# load the X,Y,Z target values into a np.array
tetrahedra = tri.find_simplex(targets)
# find which tetrahedron each target belongs to
X = tri.transform[tetrahedra, :3]
Y = targets - tri.transform[tetrahedra, 3]
b = np.einsum('ijk,ik->ij', X, Y)
bcoords = np.c_[b, 1 - b.sum(axis=1)]
# find the barycentric coordinates of each target
#np.savetxt('coords.out', bcoords, fmt="%f")
print points.shape
print indices.shape
print vertices.shape
print tetrahedra.shape
print bcoords.shape
print bcoords
print tetrahedra
print indices
print vertices
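For reference, the indices returned by find_simplex can be mapped back to vertex data by chained indexing: tri.simplices[t] gives the four corner indices of tetrahedron t, and those index back into the original points array. A minimal sketch with made-up data (unit-cube XYZ coordinates and arbitrary RGB values stand in for the CSV files):

```python
import numpy as np
import scipy.spatial

# Hypothetical stand-in data: 8 points with columns R, G, B, X, Y, Z
# (XYZ are the corners of the unit cube; the RGB values are arbitrary).
xyz = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
               dtype=float)
rgb = np.arange(24, dtype=float).reshape(8, 3)
points = np.hstack([rgb, xyz])

tri = scipy.spatial.Delaunay(points[:, 3:6])
targets = np.array([[0.5, 0.5, 0.5],   # inside the hull
                    [2.0, 2.0, 2.0]])  # outside: find_simplex returns -1

tetrahedra = tri.find_simplex(targets)
inside = tetrahedra >= 0
corner_idx = tri.simplices[tetrahedra[inside]]  # (n, 4) vertex indices
corner_rgb = points[corner_idx][:, :, 0:3]      # (n, 4, 3): R,G,B per corner
```

The -1 entries must be masked out before indexing, since -1 would otherwise silently pick the last simplex.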

Related

Faster alternative to itertools.product

Is there a faster way to get all coordinate combinations for x, y, z than itertools.product? For example, I have the ranges x: 10-310, y: 10-310 and z: 0-65.
EDIT
For example, I have to put all coordinates into a polydata like here:
points1 = vtk.vtkPoints()
for coords in itertools.product(x1, y1, z1):
    points1.InsertNextPoint(coords)
boxPolyData1 = vtk.vtkPolyData()
boxPolyData1.SetPoints(points1)
Use np.mgrid:
import numpy as np
x, y, z = np.mgrid[10:311, 10:311, 0:66]
I assumed you wanted the end points 310 and 65 inclusive.
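For completeness, if the goal is a flat list of (x, y, z) triples rather than three 3-D arrays, the mgrid output can be raveled and stacked; with C-order raveling the rows come out in the same order as itertools.product would yield them:

```python
import numpy as np

x, y, z = np.mgrid[10:311, 10:311, 0:66]
# Ravel each 3-D block and stack into an (N, 3) array of (x, y, z) triples.
# C-order raveling matches itertools.product(x_range, y_range, z_range).
coords = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
```

Each row of coords can then be fed to InsertNextPoint directly.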

Finding nearest xy-point in numpy array and second nearest with condition

My problem is like the problem in the thread Finding index of nearest point in numpy arrays of x and y coordinates, but it's extended:
For better visualization here's an image
(manipulated image; original by 112BKS, own work, graph/data from [.. ? ..], CC BY-SA 3.0):
On the one hand there is an array datafield, a numpy array with elements [value x y]. These are the thin blue lines with the numbers (the numbers are the values). On the other hand there is the array orangeline, a numpy array with elements [x y].
What I want to do is calculate the value of every element in orangeline. I visualized one concrete element of orangeline with the green circle. Its value can be interpolated from the two elements of datafield visualized with the triangles. As a result I get a value between 225 and 230 for the green circle.
First step: for every element in orangeline, find the closest element in datafield. (In the example that is the pink triangle.)
Second step: for every element in orangeline, find the closest element in datafield with a different value than the one from the first step. (In the example that is the brown triangle.)
Third step: interpolate the value for every element in orangeline from those two values and the distances to their elements.
First step can be solved with
mytree = scipy.spatial.cKDTree(datafield[:, 1:3])
dist1, indexes1 = mytree.query(orangeline)
But now I don't know how to filter the datafield for the second step. Is there a solution?
With help from @unutbu's comment I found this solution, which works quite well even in those cases where the orangeline does not pass through the field.
Here are the functions for the grid:
import matplotlib.mlab as mlab
import numpy as np
import scipy
def define_grid(rawdata):
    xmin, xmax = np.amin(rawdata[:, 1]), np.amax(rawdata[:, 1])
    ymin, ymax = np.amin(rawdata[:, 2]), np.amax(rawdata[:, 2])
    x, y, z = rawdata[:, 1], rawdata[:, 2], rawdata[:, 0]
    # Size of regular grid
    ny, nx = (ymax - ymin), (xmax - xmin)
    # Generate a regular grid to interpolate the data.
    xi = np.linspace(xmin, xmax, nx)
    yi = np.linspace(ymin, ymax, ny)
    xi, yi = np.meshgrid(xi, yi)
    # Interpolate using delaunay triangularization
    zi = mlab.griddata(x, y, z, xi, yi)
    return xi, yi, zi

def grid_as_array(xi, yi, zi):
    xi_flat, yi_flat, zi_flat = np.ravel(xi), np.ravel(yi), np.ravel(zi)
    # reduce arrays for faster calculation, take only every second element
    xi_red, yi_red, zi_red = xi_flat[1::2], yi_flat[1::2], zi_flat[1::2]
    # stack to an array with elements [x y z]; some z values are 'nan'
    xyz_with_nan = np.hstack((xi_red[:, np.newaxis], yi_red[:, np.newaxis],
                              zi_red[:, np.newaxis]))
    # sort out those elements with 'nan'
    xyz = xyz_with_nan[~np.isnan(xyz_with_nan).any(axis=1)]
    return xyz
Another function to find the closest point from the grid for the values from orangeline:
def closest_node(points, datafield):
    mytree = scipy.spatial.cKDTree(datafield)
    dist, indexes = mytree.query(points)
    return indexes
And now the code:
# use function to create from the raw data an interpolated datafield
xi, yi, zi = define_grid(datafield)
# rearrange those values to bring them in the form of an array with [x y z]
xyz = grid_as_array(xi, yi, zi)
# search closest values from grid for the points of the orangeline
# orangeline_xy is the array with elements [x y]
indexes = self.closest_node(orangeline_xy, xyz[:,0:2])
# take z values from the grid which we found before
orangeline_z = xyz[indexes, 2]
# add those z values to the points of the orangeline
orangeline_xyz = np.hstack((orangeline_xy,orangeline_z[:, np.newaxis]))
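For what it's worth, steps 2 and 3 as originally stated can also be done directly on the raw datafield, without building a grid: query the cKDTree for several neighbours at once and keep the nearest one whose value differs from the first hit. A sketch with a hypothetical two-contour datafield (my own toy numbers, not the asker's data):

```python
import numpy as np
import scipy.spatial

# Hypothetical toy datafield: rows of [value, x, y] on two contour lines.
datafield = np.array([[225., 0., 0.], [225., 1., 0.], [225., 2., 0.],
                      [230., 0., 1.], [230., 1., 1.], [230., 2., 1.]])
orangeline = np.array([[1.0, 0.25], [1.5, 0.75]])

tree = scipy.spatial.cKDTree(datafield[:, 1:3])
# Step 1: nearest element overall.
dist1, idx1 = tree.query(orangeline)
val1 = datafield[idx1, 0]
# Step 2: query all neighbours (sorted by distance) and keep the nearest
# one whose value differs from the step-1 value.
dists, idxs = tree.query(orangeline, k=len(datafield))
val2 = np.empty_like(val1)
dist2 = np.empty_like(dist1)
for i in range(len(orangeline)):
    other = datafield[idxs[i], 0] != val1[i]
    j = np.argmax(other)  # first neighbour with a different value
    val2[i], dist2[i] = datafield[idxs[i][j], 0], dists[i][j]
# Step 3: inverse-distance interpolation between the two values.
w = dist2 / (dist1 + dist2)
values = w * val1 + (1 - w) * val2
```

For the first point, (1.0, 0.25), this gives 225 at distance 0.25 and 230 at distance 0.75, hence an interpolated value of 226.25.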

Correct usage of scipy.interpolate.RegularGridInterpolator

I am a little confused by the documentation for scipy.interpolate.RegularGridInterpolator.
Say for instance I have a function f: R^3 => R which is sampled on the vertices of the unit cube. I would like to interpolate so as to find values inside the cube.
import numpy as np
# Grid points / sample locations
X = np.array([[0,0,0], [0,0,1], [0,1,0], [0,1,1], [1,0,0], [1,0,1], [1,1,0], [1,1,1.]])
# Function values at the grid points
F = np.random.rand(8)
Now, RegularGridInterpolator takes a points argument, and a values argument.
points : tuple of ndarray of float, with shapes (m1, ), ..., (mn, )
The points defining the regular grid in n dimensions.
values : array_like, shape (m1, ..., mn, ...)
The data on the regular grid in n dimensions.
I interpret this as being able to call as such:
import scipy.interpolate as irp
rgi = irp.RegularGridInterpolator(X, F)
However, when I do so, I get the following error:
ValueError: There are 8 point arrays, but values has 1 dimensions
What am I misinterpreting in the docs?
Ok I feel silly when I answer my own question, but I found my mistake with help from the documentation of the original regulargrid lib:
https://github.com/JohannesBuchner/regulargrid
points should be a list of arrays that specifies how the points are spaced along each axis.
For example, to take the unit cube as above, I should set:
pts = ( np.array([0,1.]), )*3
or if I had data which was sampled at higher resolution along the last axis, I might set:
pts = ( np.array([0,1.]), np.array([0,1.]), np.array([0,0.5,1.]) )
Finally, values has to be of shape corresponding to the grid laid out implicitly by points. For example,
val_size = map(lambda q: q.shape[0], pts)
vals = np.zeros( val_size )
# make an arbitrary function to test:
func = lambda pt: (pt**2).sum()
# collect func's values at grid pts
for i in range(pts[0].shape[0]):
    for j in range(pts[1].shape[0]):
        for k in range(pts[2].shape[0]):
            vals[i,j,k] = func(np.array([pts[0][i], pts[1][j], pts[2][k]]))
So finally,
rgi = irp.RegularGridInterpolator(points=pts, values=vals)
runs and performs as desired.
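As a sanity check (my own addition, not part of the original answer): trilinear interpolation reproduces a linear function exactly, so the call convention can be verified at an off-grid point with a known expected value.

```python
import numpy as np
import scipy.interpolate as irp

# Same grid as above: higher resolution along the last axis.
pts = (np.array([0, 1.]), np.array([0, 1.]), np.array([0, 0.5, 1.]))
# A linear test function: linear interpolation is exact for it.
X, Y, Z = np.meshgrid(*pts, indexing='ij')
vals = X + 2*Y + 3*Z  # shape (2, 2, 3), matching the grid laid out by pts
rgi = irp.RegularGridInterpolator(points=pts, values=vals)
out = rgi([(0.2, 0.4, 0.6)])  # expect 0.2 + 0.8 + 1.8 = 2.8
```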
Your answer is nicer, and it's perfectly OK for you to accept it. I'm just adding this as an "alternate" way to script it.
import numpy as np
import scipy.interpolate as spint
RGI = spint.RegularGridInterpolator
x = np.linspace(0, 1, 3) # or 0.5*np.arange(3.) works too
# populate the 3D array of values (re-using x because lazy)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
vals = np.sin(X) + np.cos(Y) + np.tan(Z)
# make the interpolator, (list of 1D axes, values at all points)
rgi = RGI(points=[x, x, x], values=vals) # can also be [x]*3 or (x,)*3
tst = (0.47, 0.49, 0.53)
print rgi(tst)
print np.sin(tst[0]) + np.cos(tst[1]) + np.tan(tst[2])
returns:
1.93765972087
1.92113615659

affine_transform xy coords from gda94

I'm trying to figure out how to convert a polygon whose coordinates are in Spatial Reference GDA94 (EPSG 4283) into xy coordinates (inverse affine transformation matrix).
The following code works:
import sys
import numpy as np
from osgeo import gdal
from osgeo import gdalconst
from shapely.geometry import Polygon
from shapely.geometry.polygon import LinearRing
# Bounding Box (via App) approximating part of QLD.
poly = Polygon(
    LinearRing([
        (137.8, -10.6),
        (153.2, -10.6),
        (153.2, -28.2),
        (137.8, -28.2),
        (137.8, -10.6)
    ])
)
# open raster data
ds = gdal.Open(sys.argv[1], gdalconst.GA_ReadOnly)
# get inverse transform matrix
(success, inv_geomatrix) = gdal.InvGeoTransform(ds.GetGeoTransform())
print inv_geomatrix
# build numpy rotation matrix
rot = np.matrix(([inv_geomatrix[1], inv_geomatrix[2]], [inv_geomatrix[4], inv_geomatrix[5]]))
print rot
# build numpy translation matrix
trans = np.matrix(([inv_geomatrix[0]], [inv_geomatrix[3]]))
print trans
# build affine transformation matrix
affm = np.matrix(([inv_geomatrix[1], inv_geomatrix[2], inv_geomatrix[0]],
                  [inv_geomatrix[4], inv_geomatrix[5], inv_geomatrix[3]],
                  [0, 0, 1]))
print affm
# poly is now a shapely geometry in gd94 coordinates -> convert to pixel
# - project poly onto raster data
xy = (rot * poly.exterior.xy + trans).T # need to transpose here to have a list of (x,y) pairs
print xy
Here's the output of the printed matrices:
(-2239.4999999999995, 20.0, 0.0, -199.49999999999986, 0.0, -20.0)
[[ 20. 0.]
[ 0. -20.]]
[[-2239.5]
[ -199.5]]
[[ 2.00000000e+01 0.00000000e+00 -2.23950000e+03]
[ 0.00000000e+00 -2.00000000e+01 -1.99500000e+02]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
[[ 516.5 12.5]
[ 824.5 12.5]
[ 824.5 364.5]
[ 516.5 364.5]
[ 516.5 12.5]]
Is there a way to do this with scipy.ndimage's affine_transform function?
There are a few options. Not all spatial transformations are in linear space, so they can't all use an affine transform; don't always rely on it. If you have two EPSG SRIDs, you can do a generic spatial transform with GDAL's OSR module. I wrote an example a while back, which can be adapted.
Otherwise, an affine transform has basic math:
/ a b xoff \
[x' y' 1] = [x y 1] | d e yoff |
\ 0 0 1 /
or
x' = a * x + b * y + xoff
y' = d * x + e * y + yoff
which can be implemented in Python over a list of points.
# original points
pts = [(137.8, -10.6),
       (153.2, -10.6),
       (153.2, -28.2),
       (137.8, -28.2)]
# Interpret result from gdal.InvGeoTransform
# see http://www.gdal.org/classGDALDataset.html#af9593cc241e7d140f5f3c4798a43a668
xoff, a, b, yoff, d, e = inv_geomatrix
for x, y in pts:
    xp = a * x + b * y + xoff
    yp = d * x + e * y + yoff
    print((xp, yp))
This is the same basic algorithm used in Shapely's shapely.affinity.affine_transform function.
from shapely.geometry import Polygon
from shapely.affinity import affine_transform
poly = Polygon(pts)
# rearrange the coefficients in the order expected by affine_transform
matrix = (a, b, d, e, xoff, yoff)
polyp = affine_transform(poly, matrix)
print(polyp.wkt)
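The per-point loop above can also be vectorized with numpy; this is the same [x' y' 1] = [x y 1] · M product applied to all points at once. The coefficients below are the ones printed earlier in the question, hard-coded here to keep the example self-contained:

```python
import numpy as np

# Inverse geotransform (xoff, a, b, yoff, d, e), as printed above.
xoff, a, b, yoff, d, e = -2239.5, 20.0, 0.0, -199.5, 0.0, -20.0
pts = np.array([(137.8, -10.6), (153.2, -10.6),
                (153.2, -28.2), (137.8, -28.2)])
# Rows [x, y] times the 2x2 coefficient block, plus the offset:
# x' = a*x + b*y + xoff, y' = d*x + e*y + yoff for every point at once.
M = np.array([[a, d], [b, e]])
out = pts @ M + np.array([xoff, yoff])
```

The result matches the pixel coordinates printed in the question, e.g. (516.5, 12.5) for the first corner.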
Lastly, it's worth mentioning that the scipy.ndimage.interpolation.affine_transform function is intended for image or raster data, and not vector data.

How can an almost arbitrary plane in a 3D dataset be plotted by matplotlib?

Given an array containing 3D data, of shape e.g. (64,64,64): how do you plot a plane through this dataset, specified by a point and a normal (similar to hkl planes in crystallography)?
Similar to what can be done in MayaVi by rotating a plane through the data.
The resulting plot will contain non-square planes in most cases.
Can those be done with matplotlib (some sort of non-rectangular patch)?
Edit: I almost solved this myself (see below) but still wonder how non-rectangular patches can be plotted in matplotlib...?
Edit: Due to discussions below I restated the question.
This is funny, I replied to a similar question just today. The way to go is: interpolation. You can use griddata from scipy.interpolate:
Griddata
This page features a very nice example, and the signature of the function is really close to your data.
You still have to somehow define the points on your plane for which you want to interpolate the data. I will have a look at this; my linear algebra lessons were a couple of years ago.
I have the penultimate solution for this problem. Partially solved by using the second answer to Plot a plane based on a normal vector and a point in Matlab or matplotlib :
# coding: utf-8
import numpy as np
from matplotlib.pyplot import imshow, show

A = np.empty((64, 64, 64))  # This is the data array

def f(x, y):
    return np.sin(x / (2 * np.pi)) + np.cos(y / (2 * np.pi))

xx, yy = np.meshgrid(range(64), range(64))
for x in range(64):
    A[:, :, x] = f(xx, yy) * np.cos(x / np.pi)

N = np.zeros((64, 64))
"""This is the plane we cut from A.
It should be larger than 64, due to diagonal planes being larger.
Will be fixed."""

normal = np.array([-1, -1, 1])  # Define cut plane here. Normal vector components restricted to integers
point = np.array([0, 0, 0])
d = -np.sum(point * normal)

def plane(x, y):  # Get plane's z values
    return (-normal[0] * x - normal[1] * y - d) / normal[2]

def getZZ(x, y):  # Get z for all values x, y. If z >= 64 it's out of range
    for i in x:
        for j in y:
            if plane(i, j) < 64:
                N[i, j] = A[i, j, int(plane(i, j))]  # index must be an integer

getZZ(range(64), range(64))
imshow(N, interpolation="Nearest")
show()
It's not the ultimate solution since the plot is not restricted to points having a z value, planes larger than 64 * 64 are not accounted for and the planes have to be defined at (0,0,0).
For the reduced requirements, I prepared a simple example
import numpy as np
import pylab as plt

data = np.arange(64**3)
data.resize((64, 64, 64))

def get_slice(volume, orientation, index):
    orientation2slicefunc = {
        "x": lambda ar: ar[index, :, :],
        "y": lambda ar: ar[:, index, :],
        "z": lambda ar: ar[:, :, index]
    }
    return orientation2slicefunc[orientation](volume)
plt.subplot(221)
plt.imshow(get_slice(data, "x", 10), vmin=0, vmax=64**3)
plt.subplot(222)
plt.imshow(get_slice(data, "x", 39), vmin=0, vmax=64**3)
plt.subplot(223)
plt.imshow(get_slice(data, "y", 15), vmin=0, vmax=64**3)
plt.subplot(224)
plt.imshow(get_slice(data, "z", 25), vmin=0, vmax=64**3)
plt.show()
This leads to the following plot:
The main trick is the dictionary mapping orientations to lambda functions, which saves us from writing annoying if-then-else blocks. Of course you can decide to use different names, e.g. numbers, for the orientations.
Maybe this helps you.
Thorsten
P.S.: I didn't care about "IndexOutOfRange", for me it's o.k. to let this exception pop out since it is perfectly understandable in this context.
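As a side note (my own variant, not part of Thorsten's answer): numpy's take already dispatches on an axis number, so the same slicing can be written without the dictionary if numeric orientations are acceptable.

```python
import numpy as np

data = np.arange(64**3).reshape(64, 64, 64)

def get_slice(volume, axis, index):
    # np.take selects `index` along the given axis, replacing the
    # dictionary-of-lambdas dispatch with a single call.
    return np.take(volume, index, axis=axis)

s = get_slice(data, 0, 10)  # same as data[10, :, :]
```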
I had to do something similar for an MRI data enhancement:
The code can probably be optimized, but it works as it is.
My data is a 3-dimensional numpy array representing an MRI scan. It has size [128,128,128], but the code can be modified to accept any dimensions. When the plane is outside the cube boundary, you have to give a default value to the variable fill in the main function; in my case I chose: data_cube[0:5,0:5,0:5].mean()
def create_normal_vector(x, y, z):
    normal = np.asarray([x, y, z])
    normal = normal / np.sqrt(sum(normal**2))
    return normal

def get_plane_equation_parameters(normal, point):
    a, b, c = normal
    d = np.dot(normal, point)
    return a, b, c, d  # ax+by+cz=d

def get_point_plane_proximity(plane, point):
    # just an approximation
    return np.dot(plane[0:-1], point) - plane[-1]

def get_corner_interesections(plane, cube_dim=128):  # to reduce the search space
    # dimension is 128,128,128
    corners_list = []
    only_x = np.zeros(4)
    min_prox_x = 9999
    min_prox_y = 9999
    min_prox_z = 9999
    min_prox_yz = 9999
    for i in range(cube_dim):
        temp_min_prox_x = abs(get_point_plane_proximity(plane, np.asarray([i, 0, 0])))
        # print("pseudo distance x: {0}, point: [{1},0,0]".format(temp_min_prox_x, i))
        if temp_min_prox_x < min_prox_x:
            min_prox_x = temp_min_prox_x
            corner_intersection_x = np.asarray([i, 0, 0])
            only_x[0] = i
        temp_min_prox_y = abs(get_point_plane_proximity(plane, np.asarray([i, cube_dim, 0])))
        # print("pseudo distance y: {0}, point: [{1},{2},0]".format(temp_min_prox_y, i, cube_dim))
        if temp_min_prox_y < min_prox_y:
            min_prox_y = temp_min_prox_y
            corner_intersection_y = np.asarray([i, cube_dim, 0])
            only_x[1] = i
        temp_min_prox_z = abs(get_point_plane_proximity(plane, np.asarray([i, 0, cube_dim])))
        # print("pseudo distance z: {0}, point: [{1},0,{2}]".format(temp_min_prox_z, i, cube_dim))
        if temp_min_prox_z < min_prox_z:
            min_prox_z = temp_min_prox_z
            corner_intersection_z = np.asarray([i, 0, cube_dim])
            only_x[2] = i
        temp_min_prox_yz = abs(get_point_plane_proximity(plane, np.asarray([i, cube_dim, cube_dim])))
        # print("pseudo distance yz: {0}, point: [{1},{2},{2}]".format(temp_min_prox_yz, i, cube_dim))
        if temp_min_prox_yz < min_prox_yz:
            min_prox_yz = temp_min_prox_yz
            corner_intersection_yz = np.asarray([i, cube_dim, cube_dim])
            only_x[3] = i
    corners_list.append(corner_intersection_x)
    corners_list.append(corner_intersection_y)
    corners_list.append(corner_intersection_z)
    corners_list.append(corner_intersection_yz)
    corners_list.append(only_x.min())
    corners_list.append(only_x.max())
    return corners_list

def get_points_intersection(plane, min_x, max_x, data_cube, shape=128):
    fill = data_cube[0:5, 0:5, 0:5].mean()  # this can be a parameter
    extended_data_cube = np.ones([shape+2, shape, shape]) * fill
    extended_data_cube[1:shape+1, :, :] = data_cube
    diag_image = np.zeros([shape, shape])
    min_x_value = 999999
    for i in range(shape):
        for j in range(shape):
            for k in range(int(min_x), int(max_x)+1):
                current_value = abs(get_point_plane_proximity(plane, np.asarray([k, i, j])))
                # print("current_value:{0}, val: [{1},{2},{3}]".format(current_value, k, i, j))
                if current_value < min_x_value:
                    diag_image[i, j] = extended_data_cube[k, i, j]
                    min_x_value = current_value
            min_x_value = 999999
    return diag_image
The way it works is the following:
you create a normal vector:
for example [5,0,3]
normal1=create_normal_vector(5, 0,3) #this is only to normalize
then you create a point:
(my cube data shape is [128,128,128])
point = [64,64,64]
You calculate the plane equation parameters, [a,b,c,d] where ax+by+cz=d
plane1=get_plane_equation_parameters(normal1,point)
then to reduce the search space you can calculate the intersection of the plane with the cube:
corners1 = get_corner_interesections(plane1,128)
where corners1 = [intersection [x,0,0],intersection [x,128,0],intersection [x,0,128],intersection [x,128,128], min intersection [x,y,z], max intersection [x,y,z]]
With all these you can calculate the intersection between the cube and the plane:
image1 = get_points_intersection(plane1,corners1[-2],corners1[-1],data_cube)
Some examples:
normal is [1,0,0] point is [64,64,64]
normal is [5,1,0],[5,1,1],[5,0,1] point is [64,64,64]:
normal is [5,3,0],[5,3,3],[5,0,3] point is [64,64,64]:
normal is [5,-5,0],[5,-5,-5],[5,0,-5] point is [64,64,64]:
Thank you.
The other answers here do not appear to be very efficient: they use explicit loops over pixels or scipy.interpolate.griddata, which is designed for unstructured input data. Here is an efficient (vectorized) and generic solution.
There is a pure numpy implementation (for nearest-neighbor "interpolation") and one for linear interpolation, which delegates the interpolation to scipy.ndimage.map_coordinates. (The latter function probably didn't exist in 2013, when this question was asked.)
import numpy as np
from scipy.ndimage import map_coordinates

def slice_datacube(cube, center, eXY, mXY, fill=np.nan, interp=True):
    """Get a 2D slice from a 3-D array.

    Copyright: Han-Kwang Nienhuys, 2020.
    License: any of CC-BY-SA, CC-BY, BSD, GPL, LGPL
    Reference: https://stackoverflow.com/a/62733930/6228891

    Parameters:
    - cube: 3D array, assumed shape (nx, ny, nz).
    - center: shape (3,) with coordinates of center; can be float.
    - eXY: unit vectors, shape (2, 3) - for X and Y axes of the slice
      (unit vectors must be orthogonal; normalization is optional).
    - mXY: size tuple of output array (mX, mY) - int.
    - fill: value to use for out-of-range points.
    - interp: whether to interpolate (rather than using 'nearest').

    Return:
    - slice: array, shape (mX, mY).
    """
    center = np.array(center, dtype=float)
    assert center.shape == (3,)
    eXY = np.array(eXY) / np.linalg.norm(eXY, axis=1)[:, np.newaxis]
    if not np.isclose(eXY[0] @ eXY[1], 0, atol=1e-6):
        raise ValueError('eX and eY not orthogonal.')

    # R: rotation matrix: data_coords = center + R @ slice_coords
    eZ = np.cross(eXY[0], eXY[1])
    R = np.array([eXY[0], eXY[1], eZ], dtype=np.float32).T

    # set up slice points P with coordinates (X, Y, 0)
    mX, mY = int(mXY[0]), int(mXY[1])
    Xs = np.arange(0.5 - mX/2, 0.5 + mX/2)
    Ys = np.arange(0.5 - mY/2, 0.5 + mY/2)
    PP = np.zeros((3, mX, mY), dtype=np.float32)
    PP[0, :, :] = Xs.reshape(mX, 1)
    PP[1, :, :] = Ys.reshape(1, mY)

    # Transform to data coordinates (x, y, z) - idx.shape == (3, mX, mY)
    if interp:
        idx = np.einsum('il,ljk->ijk', R, PP) + center.reshape(3, 1, 1)
        slice = map_coordinates(cube, idx, order=1, mode='constant', cval=fill)
    else:
        idx = np.einsum('il,ljk->ijk', R, PP) + (0.5 + center.reshape(3, 1, 1))
        idx = idx.astype(np.int16)
        # Find out which coordinates are out of range - shape (mX, mY)
        badpoints = np.any([
            idx[0, :, :] < 0,
            idx[0, :, :] >= cube.shape[0],
            idx[1, :, :] < 0,
            idx[1, :, :] >= cube.shape[1],
            idx[2, :, :] < 0,
            idx[2, :, :] >= cube.shape[2],
            ], axis=0)
        idx[:, badpoints] = 0
        slice = cube[idx[0], idx[1], idx[2]]
        slice[badpoints] = fill
    return slice
# Demonstration
nx, ny, nz = 50, 70, 100
cube = np.full((nx, ny, nz), np.float32(1))
cube[nx//4:nx*3//4, :, :] += 1
cube[:, ny//2:ny*3//4, :] += 3
cube[:, :, nz//4:nz//2] += 7
cube[nx//3-2:nx//3+2, ny//2-2:ny//2+2, :] = 0 # black dot
Rz, Rx = np.pi/6, np.pi/4 # rotation angles around z and x
cz, sz = np.cos(Rz), np.sin(Rz)
cx, sx = np.cos(Rx), np.sin(Rx)
Rmz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
Rmx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
eXY = (Rmx @ Rmz).T[:2]
slice = slice_datacube(
    cube,
    center=[nx/3, ny/2, nz*0.7],
    eXY=eXY,
    mXY=[80, 90],
    fill=np.nan,
    interp=False
)
import matplotlib.pyplot as plt
plt.close('all')
plt.imshow(slice.T) # imshow expects shape (mY, mX)
plt.colorbar()
Output (for interp=False):
For this test case (50x70x100 datacube, 80x90 slice size) the run time is 376 µs (interp=False) and 550 µs (interp=True) on my laptop.
