I am trying to write a function that extracts a 2D slice in a non-orthogonal plane from a 3D volume using numpy. The non-orthogonal slice obtained should be a rectangular two-dimensional array of shape (n, m), while the input volume should be a three-dimensional numpy array of shape (i, j, k).
So far I have tried to create a function that receives the volume, the plane normal and a point that belongs to the plane as inputs. I'm representing the plane normal and the point with numpy arrays of shape (3,). I am quite certain the function should follow these steps:
The function should first create a grid with the indices of the volume coordinates.
The function should define the plane using the dot product of the normal and the point.
The function should find the coordinates of the bounding box that contains the entire image slice. It is important to note that, except for specific edge cases where one of the normal's coefficients is 0, most bounding boxes will end up with some of their corner coordinates lying outside the image.
The function should interpolate the slice points from the bounding box using the volume, as some of the coordinates contained by the slice may not be integers and thus will not actually exist in the image.
The non-orthogonal slice obtained from interpolation should then be returned.
I am stuck at step 3. I have gotten steps 4 and 5 to work using orthogonal planes, which are easy to obtain with numpy slicing, but I have been stuck for days trying to get my head around how to find the bounding box coordinates (even though I do not think this should be a hard problem). Any help would be greatly appreciated!
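Update: for reference, here is a rough sketch of the direction I am exploring, sampling the plane directly with scipy.ndimage.map_coordinates rather than computing an explicit bounding box. The function name oblique_slice, the (n, m) grid size and the spacing are placeholders, and the normal and point are assumed to be given in index coordinates; I am not certain the basis construction is right.

import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, normal, point, n=64, m=64, spacing=1.0):
    # Build an orthonormal basis (u, v) spanning the plane defined by `normal`.
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    seed = np.array([1.0, 0.0, 0.0])
    if abs(normal @ seed) > 0.9:          # seed too close to the normal
        seed = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, seed)
    u = u / np.linalg.norm(u)
    v = np.cross(normal, u)

    # Lay an (n, m) grid of 3D coordinates on the plane, centred on `point`.
    point = np.asarray(point, dtype=float)
    rr, cc = np.meshgrid((np.arange(n) - n / 2) * spacing,
                         (np.arange(m) - m / 2) * spacing, indexing="ij")
    coords = (point[:, None, None]
              + u[:, None, None] * rr
              + v[:, None, None] * cc)    # shape (3, n, m)

    # Trilinear interpolation; coordinates falling outside the volume become 0.
    return map_coordinates(volume, coords, order=1, mode="constant", cval=0.0)

map_coordinates expects the first axis of the coordinate array to index the volume's dimensions, so a (3, n, m) coordinate array yields an (n, m) slice.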
Related
Recently I was struggling to take the pixel values of a 3D volume (a numpy array) using the spatial coordinates of an STL object.
The STL object spatially overlaps the 3D volume, but the latter has no coordinate system, so I don't know how to pick the pixel values corresponding to the STL coordinates.
Any idea?
If the STL object is truly in the 3d volume's coordinate space, then you can simply use the STL's coordinates as indices to look up values in the 3d array. This lookup amounts to nearest-neighbor interpolation of the 3d image. For better-looking results you would want to do linear (or even cubic) interpolation of the nearby pixels.
In most 3d imaging tasks, those coordinate spaces do not align. So there is a transform to go from world space to 3d volume space. But if all you have is a 3d numpy array, then there is no transformation information.
Update:
To index into the 3d volume, take the X, Y, Z coordinates of your point from the STL object and convert them into integer values I, J, K. Then look the value up in the numpy array using them as indices: np_array[K][J][I]. I think you have to reverse the order of the indices because of the array ordering numpy uses.
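As a rough sketch of that lookup (stl_points and volume are placeholder names, and this assumes the mesh really is in the volume's index space):

import numpy as np

points = np.asarray(stl_points)            # shape (N, 3), columns X, Y, Z
ijk = np.rint(points).astype(int)          # nearest-neighbor rounding

# Discard points that fall outside the volume.
limits = np.array(volume.shape)[::-1]      # (nx, ny, nz) to match the X, Y, Z columns
inside = np.all((ijk >= 0) & (ijk < limits), axis=1)
ijk = ijk[inside]

# Reverse to (Z, Y, X) order to match numpy's indexing.
values = volume[ijk[:, 2], ijk[:, 1], ijk[:, 0]]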
When you say the 3d array and the STL align in Python, how are you showing that? The original DICOM or NIfTI files certainly have world coordinate transformations in their metadata.
I have to cluster a 3d array that looks like this
a=([[[1,2,3],[4,5,6],[7,8,9]],[[1,4,7],[2,5,9],[3,6,8]]])
Imagine that this array represents the coordinates of a triangle in a time series, so the first 2d array represents the coordinates of the vertices in the first frame, the second array represents the coordinates in the second frame and so on.
I need to cluster the position of this triangle over time, but the clustering algorithms in scikit-learn only work on 2d arrays. I have reshaped the 3d array to obtain this
b=([[1,2,3,4,5,6,7,8,9],[1,4,7,2,5,9,3,6,8]])
but the performance of the clustering algorithms is poor (please note that the triangle is just an example; I have to cluster the position of a much more complex figure, so the dimensionality of the points in the 2d array is very high).
So I was wondering if there are other methods to cluster a 3d array besides reshaping and dimensionality reduction techniques. I've read that converting the 3d array into a distance matrix could be a solution, but I really don't know how to do this. If anyone has any advice on how to do this, or any other advice on how to solve this problem, I would really appreciate the help!
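Update: to clarify, here is the reshape I am doing plus a rough sketch of what I imagine the distance-matrix route might look like (DBSCAN is only an example of a clusterer that accepts a precomputed matrix, and the eps value is an arbitrary placeholder):

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import DBSCAN

a = np.asarray(a, dtype=float)            # shape (n_frames, n_vertices, 3)
b = a.reshape(a.shape[0], -1)             # one flattened feature row per frame

# Pairwise Euclidean distances between frames, as a square matrix.
D = squareform(pdist(b, metric="euclidean"))

labels = DBSCAN(eps=5.0, min_samples=2, metric="precomputed").fit_predict(D)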
The clustering algorithms work with matrices in the format (n_samples, n_features).
So in your case n_samples is your position in time and n_features are your coordinates. I'm assuming you are trying to find the average location of your shapes across time. For this type of task I would advise calculating the center point of your shape: that way, no matter the shape, you have one point in the middle of the object to track across time. It makes a bit more sense than trying to track all the corners of an object which, I assume, can rotate.
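A small sketch of that idea, reusing the array from the question (k-means is only an example, and the number of clusters is a placeholder):

import numpy as np
from sklearn.cluster import KMeans

a = np.asarray(a, dtype=float)        # shape (n_frames, n_vertices, 3)
centers = a.mean(axis=1)              # centre point of the shape in each frame

# Cluster the per-frame centre points: (n_samples, n_features) = (n_frames, 3).
labels = KMeans(n_clusters=2, n_init=10).fit_predict(centers)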
Hope it helps!
I have 3D volume data stored in a 3-dimensional array, with a background value of 0 and a volume value of 1.
Now I want to get arbitrary section planes of this volume. I read the answer here: How can an almost arbitrary plane in a 3D dataset be plotted by matplotlib?
But it seems that the accepted answer is wrong: it generates mapped coordinates in the x-y plane, not the slice coordinates.
So how can I get the correct shape of the slice plane? Is there any way to transform the mapped shape back to the original shape?
Thanks!
The question might be outdated, but it looks like the following function from scipy.ndimage might solve your issue.
What scipy.ndimage.interpolation.rotate does is rotate the full 3d array around any of the 3 axes by a given angle, interpolating the stored values onto the new "cells". It also resizes (extends) the new array accordingly, filling the new empty cells with a value you specify. After that you can take a slice as you normally would, say: array[:,sy//2,:].
So in short, here's a central cut around a diagonal parallel to the z-axis (for simplicity):
import scipy.ndimage

sz, sy, sx = array.shape               # `array` is the 3d volume
central_cut = array[:, sy//2, :]       # a cut in the x-z plane ...
# ... passing through the middle of the y-axis

# rotate by 45 degrees around the z axis ...
# ... `axes=(2, 1)` is `(x, y)`, which defines the rotation plane
array_rotated = scipy.ndimage.interpolation.rotate(array, angle=45, axes=(2, 1))

sz, sy, sx = array_rotated.shape
# now you'll notice that `sz` is the same ...
# ... but `sx` and `sy` have increased because the diagonal is longer
diagonal_cut = array_rotated[:, sy//2, :]   # this slice is in the new x'-z' plane ...
# ... but in terms of the original array ...
# ... it passes through the diagonal of x-y along z
You can actually think a bit further and extend this to an arbitrary slice by rotating it around different axes several times.
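For example, a rough sketch of chaining two rotations before taking an axis-aligned cut (the angles and axes here are arbitrary; scipy.ndimage.rotate is the same function under its current name, and order=1 keeps it reasonably fast):

import scipy.ndimage

rot = scipy.ndimage.rotate(array, angle=45, axes=(2, 1), order=1)   # around z
rot = scipy.ndimage.rotate(rot, angle=30, axes=(2, 0), order=1)     # around y
sz, sy, sx = rot.shape
oblique_cut = rot[:, sy // 2, :]   # axis-aligned slice of the doubly rotated volume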
PS. If you feel it takes too much time, you can sacrifice interpolation quality by passing order=0 (the default is order=3). This will run faster.
I am working on an image-processing script that takes some pixel values and changes their locations. My script returns pixel locations that aren't integers, which makes it impossible to create an image array where I can simply plug in the pixel value at its respective index.
I am looking to use interp2d to do this.
For example, I have an x, y, value matrix called 'scan':
scan = [[1.25, 1.25, 49], [4.65, 6.34, 154], ...etc]
scan[:,0]=Xs #array of x values
scan[:,1]=Ys #array of y values
scan[:,2]=Vs #array of pixel values
which I need to interpolate onto a uniform 10x10 grid to show as an image.
I am currently trying to use interp2d like this:
from scipy import interpolate
f = interpolate.interp2d(scan[:, 0], scan[:, 1], scan[:, 2])
image = f(range(10), range(10))
I have many points; some fall inside and some outside the bounds of the uniform image I am trying to map onto.
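For completeness, here is a rough sketch of what I think the same mapping would look like with scipy.interpolate.griddata, which apparently handles scattered samples and takes a fill value for grid points outside the data (the fill value of 0 is just a placeholder), though I am not sure it is the right tool:

import numpy as np
from scipy.interpolate import griddata

scan = np.asarray(scan)                          # columns: x, y, pixel value
gx, gy = np.meshgrid(np.arange(10), np.arange(10))

# Linear interpolation of the scattered values onto the uniform 10x10 grid;
# grid points outside the convex hull of the samples get the fill value.
image = griddata(scan[:, :2], scan[:, 2], (gx, gy), method="linear", fill_value=0)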
Thanks,
Niall
I am doing my best to replicate the algorithm described in this paper to build an inpainting algorithm. The idea is to get the contour, or edge, points of the part of the image that needs to be inpainted. In order to find the most linear point in the region, the orthogonal normal vector is found. On page 6, a short description of the implementation is given:
In our implementation the contour δΩ of the target region is modelled as a dense list of image point locations. Given a point p ∈ δΩ, the normal direction n_p is computed as follows: i) the positions of the "control" points of δΩ are filtered via a bi-dimensional Gaussian kernel and, ii) n_p is estimated as the unit vector orthogonal to the line through the preceding and the successive points in the list.
So it appears that I need to run all these points through a Gaussian filter. How do I set up a bi-dimensional Gaussian filter when all I have is a single dimension, a list of points?
Let's say our contour is a box shape; then I create a one-dimensional list of points: [1,1],[1,2],[1,3],[2,1],[2,3],[3,1],[3,2],[3,3]. Do I simply make a new 2d matrix, put the points in, leave the middle point at [2,2] empty, and then run a Gaussian filter over it? This doesn't seem very dense, though.
I am trying to do this with Python libraries.
"A dense list of image points" is simply a line.
From what I understand, you are basically applying a Gaussian filter to a black-and-white image where the line is black and the background is white. I think that by doing this they approximate fitting a curve model.
Convolve all of the points in the 2D region surrounding the point and then overwrite the point with the result.
This will make any curve on the edge of the target region less sharp, lowering the noise in the calculation of the normal, which would be the vector orthogonal to the line through the two points that surround the current one.
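A rough sketch of one way to implement this step: smooth the ordered contour coordinates with a Gaussian along the list, then take each normal orthogonal to the segment joining the neighbouring points. The contour below is the box example from the question, reordered so that consecutive entries are neighbours on the boundary, and gaussian_filter1d on the x and y coordinate lists stands in for the bi-dimensional kernel described in the paper:

import numpy as np
from scipy.ndimage import gaussian_filter1d

# Ordered, closed contour of shape (N, 2): the box from the question,
# walked in order around the boundary.
contour = np.array([[1, 1], [1, 2], [1, 3], [2, 3],
                    [3, 3], [3, 2], [3, 1], [2, 1]], dtype=float)

# Smooth x and y along the list; mode="wrap" because the contour is closed.
smooth = gaussian_filter1d(contour, sigma=1.0, axis=0, mode="wrap")

# Normal at each point: unit vector orthogonal to the line through the
# preceding and the successive points in the list.
tangent = np.roll(smooth, -1, axis=0) - np.roll(smooth, 1, axis=0)
normals = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)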