I am working on an image processing script that takes some pixel values and changes their locations. The script returns pixel locations that aren't integers, which makes it impossible to build an image array by simply plugging each pixel value in at its index.
I am looking to use interp2d to do this.
For example, I have an x, y, value matrix called 'scan':
scan = [[1.25, 1.25, 49], [4.65, 6.34, 154], ...etc]
scan[:,0]=Xs #array of x values
scan[:,1]=Ys #array of y values
scan[:,2]=Vs #array of pixel values
which I need to interpolate onto a uniform 10x10 grid to show as an image.
I am currently trying to use interp2d as
f=interpolate.interp2d(scan[:,0],scan[:,1],scan[:,2])
image=f(range(10),range(10))
I have many points; some fall inside the bounds of the uniform image I am trying to map onto, and some fall outside.
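For scattered samples like these, scipy.interpolate.griddata may be a better fit than interp2d (which is deprecated in recent SciPy and assumes gridded input). A minimal sketch, assuming scan is an (N, 3) float array; the sample values here are made up:

```python
import numpy as np
from scipy.interpolate import griddata

# hypothetical scattered samples: columns are x, y, pixel value
scan = np.array([[1.25, 1.25, 49.0],
                 [4.65, 6.34, 154.0],
                 [2.0, 8.5, 30.0],
                 [8.7, 1.1, 200.0],
                 [7.1, 8.3, 90.0]])

# target uniform 10x10 pixel grid
gx, gy = np.meshgrid(np.arange(10), np.arange(10))

# griddata accepts truly scattered (x, y) points; grid cells outside
# the convex hull of the samples come back as NaN, which can serve
# as a NODATA marker for the out-of-bounds areas
image = griddata(scan[:, :2], scan[:, 2], (gx, gy), method='linear')
```

image[row, col] then corresponds to y = row, x = col, ready to show with imshow.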
Thanks,
Niall
I am trying to write a function that extracts a 2D slice in a non-orthogonal plane from a 3D volume using numpy. The non-orthogonal slice obtained should be a rectangular two-dimensional array of shape (n, m), while the input volume should be a three-dimensional numpy array of shape (i, j, k).
So far I have tried to create a function that receives the volume, the plane normal and a point that belongs to the plane as inputs. I'm representing the plane normal and the point with numpy arrays of shape (3,). I am quite certain the function should follow these steps:
The function should first create a grid with the indices of the volume coordinates.
The function should define the plane using the dot product of the normal and the point.
The function should find the coordinates of the bounding box that contains the entire image slice. It is important to note that, except for specific edge cases where one of the normal coefficients is 0, most bounding boxes will end up with corners whose coordinates lie partly outside the image.
The function should interpolate the slice points from the bounding box using the volume, as some of the coordinates contained by the slice may not be integers and thus will not actually exist in the image.
The non-orthogonal slice obtained from interpolation should then be returned.
I am stuck at step 3. I have gotten steps 4 and 5 to work using orthogonal planes, which are easy to obtain using numpy slicing, but I have been stuck for days trying to get my head around how to find the bounding box coordinates (even though I do not think this should be a hard problem). Any help would be very welcome!
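One way to approach step 3 is to build an orthonormal basis inside the plane and project the volume's eight corners onto it; the min/max of those projections bound every point the slice can contain. A sketch under that assumption (the function and variable names are mine, not from the question):

```python
import numpy as np

def plane_basis(normal):
    # two orthonormal vectors spanning the plane with this normal
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n[0]) > 0.9:                 # avoid a helper nearly parallel to n
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    return u, v

def slice_bounding_box(shape, normal, point):
    # project the volume's 8 corners onto the in-plane basis (u, v);
    # the extreme projections bound the whole slice in plane coordinates
    u, v = plane_basis(normal)
    corners = np.array([[i, j, k]
                        for i in (0, shape[0] - 1)
                        for j in (0, shape[1] - 1)
                        for k in (0, shape[2] - 1)], dtype=float)
    rel = corners - np.asarray(point, dtype=float)
    return (rel @ u).min(), (rel @ u).max(), (rel @ v).min(), (rel @ v).max()
```

Sampling a regular (n, m) grid of in-plane coordinates inside that box and mapping each back to point + pu*u + pv*v gives the 3D coordinates to feed into step 4's interpolation.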
Recently I was struggling to read the pixel values of a 3D volume (a numpy array) at specific spatial coordinates taken from an STL object.
The STL object spatially overlaps the 3D volume, but the volume has no coordinate system, so I don't know how to pick the pixel values corresponding to the STL coordinates.
Any ideas?
If the STL object is truly in the 3D volume's coordinate space, then you can simply use the STL's coordinates as indices to look up values in the 3D array. This lookup amounts to nearest-neighbor interpolation of the 3D image. For better-looking results you'd want to do linear (or even cubic) interpolation over the nearby pixels.
In most 3D imaging tasks, those coordinate spaces do not align, so there is a transform to go from world space to 3D volume space. But if all you have is a 3D numpy array, then there is no transformation information.
Update:
To index into the 3D volume, take the X, Y, Z coordinates of your point from the STL object and convert them into integer values I, J, K. Then look up in the numpy array using I, J, K as indices: np_array[K][J][I]. I think you have to reverse the order of the indices because of the array ordering numpy uses.
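A sketch of that lookup, plus a trilinearly interpolated variant via scipy.ndimage.map_coordinates; the helper names are mine, and it assumes the point is already in the volume's index space:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_nearest(volume, xyz):
    # volume is indexed [k, j, i] (z, y, x); round to the nearest voxel
    i, j, k = np.round(np.asarray(xyz)).astype(int)
    return volume[k, j, i]

def sample_linear(volume, xyz):
    # trilinear interpolation at a fractional coordinate
    x, y, z = xyz
    return map_coordinates(volume, [[z], [y], [x]], order=1)[0]
```

The order=1 spline in map_coordinates gives linear interpolation; order=3 would give the cubic variant mentioned above.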
When you say the 3D array and the STL align in Python, how are you showing that? The original DICOM or NIfTI files certainly have world-coordinate transformations in their metadata.
I have several pairs of images + clouds of 3D points that correspond to the same view. There are no rules for the change of coordinates. Everything is stored in a table.
Rows : image x coordinate
Columns : image y coordinate
Cell: 3D points (x,y,z) coordinates.
Practically, this is a NumPy array of dimensions (1920, 1080, 3).
One direction, finding the coordinates of the 3D point when you have the coordinates in the image, is pretty straightforward:
def image_to_xyz(self, image_points):
    """
    Takes points in the image.
    Returns the corresponding xyz coordinates.
    """
    xyz = self.xyz
    image_points_x = image_points[:, 0]
    image_points_y = image_points[:, 1]
    xyz_points = xyz[image_points_x, image_points_y]
    return xyz_points
For the other way around, it is easy, but dirty, to write a for loop that searches for a corresponding point within a precision threshold. I tried it.
It's in Python (I can't use Julia), so the loop took several minutes to complete. Way too slow...
Do you have better suggestions?
Thanks
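One faster alternative to the loop is a KD-tree built over the table's 3D points, queried once per lookup. A sketch with scipy.spatial.cKDTree, using a smaller grid and made-up data; max_dist plays the role of your precision threshold:

```python
import numpy as np
from scipy.spatial import cKDTree

H, W = 120, 90                        # stand-in for the 1920x1080 table
rng = np.random.default_rng(0)
xyz = rng.random((H, W, 3))           # pixel (row, col) -> 3D point

tree = cKDTree(xyz.reshape(-1, 3))    # build once, query many times

def xyz_to_image(points, max_dist=1e-6):
    # nearest table entry for each (N, 3) batch of 3D query points
    dist, idx = tree.query(points, distance_upper_bound=max_dist)
    found = np.isfinite(dist)         # dist is inf when no match in range
    rows, cols = np.unravel_index(np.where(found, idx, 0), (H, W))
    return rows, cols, found
```

The tree build is O(N log N) once, and each query is roughly O(log N), so a batch of lookups should run in seconds rather than minutes.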
I have a point cloud with XYZ data. I have read the .ply file using pyntcloud and converted it into a numpy array of shape (553181, 3).
I want to convert the point cloud into, e.g., an 800x600x3 matrix that can also be treated as an RGB image. For this, I scaled the XY coordinates to the [0, 800] and [0, 600] ranges.
so far ->
I have normalized the x and y coordinates to the ranges (0, 800) and (0, 600)
I have created data bins of size 800 and 600 and stored the respective x and y coordinate points in them
I don't know how to map these points to get a range image
I am new to python and would greatly appreciate the help and guidance
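A minimal sketch of the missing mapping step, assuming points is the (N, 3) array and using Z as the pixel value; where several points fall in the same bin, the last one written wins (the function name and defaults are mine):

```python
import numpy as np

def cloud_to_range_image(points, width=800, height=600):
    # scale x into [0, width-1] and y into [0, height-1], then use the
    # scaled coordinates as integer pixel indices and store z there
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xi = ((x - x.min()) / max(np.ptp(x), 1e-12) * (width - 1)).astype(int)
    yi = ((y - y.min()) / max(np.ptp(y), 1e-12) * (height - 1)).astype(int)
    img = np.full((height, width), np.nan)   # NaN marks empty pixels
    img[yi, xi] = z                          # later points overwrite earlier
    return img
```

To get a displayable grayscale or RGB image, the z channel can then be normalized to 0-255 and the NaN holes filled or masked.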
I don't understand how you could possibly treat a 3D point cloud as a 2D image. Anyway, you could use open3d to visualize your point cloud and store it in an .xyzrgb file if you have RGB data, which it seems you don't, since you converted the file into a NumPy array with 3 columns that have to be x, y and z values. Therefore, you may need RGB values, or you can give random values to the point cloud. A good way to do that is with pptk, which allows you to generate RGB colors on a point cloud and to render images (screenshots) in a built-in viewer (I assume that is what you need).
A simple workflow could be this:
import numpy as np
import random
import pptk

xyz = pptk.rand(100, 3)  # generates a (100, 3) array; this could also be
                         # your own (553181, 3) numpy array
v = pptk.viewer(xyz)

rgb = pptk.rand(100, 3)  # random colors; must be the same size as xyz

# or you can create a single random color of the shape you need
r = np.full(shape=100, fill_value=random.randint(1, 255))
g = np.full(shape=100, fill_value=random.randint(1, 255))
b = np.full(shape=100, fill_value=random.randint(1, 255))
rgb = np.dstack((r, g, b))

colors = rgb / 256  # values need to be in the 0-1 interval
v.attributes(colors)
v.capture('screenshot.png')
Perhaps you would like to give massive-3d-point-clouds-visualization-in-python a read. It describes more or less what I put in the example above, which is just a short form of it.
I'm using griddata() to interpolate my (irregular) 2-dimensional depth measurements: x, y, depth. The method does a great job, but it interpolates over the entire grid wherever it can find two opposing points. I don't want that behaviour. I'd like the interpolation to stay close to the existing measurements, say within a certain radius.
Is it possible to tell numpy/scipy: don't interpolate if you're too far from an existing measurement, resulting in a NODATA value? Something like ideal = griddata(.., .., .., radius=5.0)
edit example:
In the image below, the black dots are the measurements. Shades of blue are the cells interpolated by numpy. The area marked in green is in fact part of the picture but is considered NODATA by numpy (because there are no points in between). Now, the red areas are interpolated, but I want to get rid of them. Any ideas?
OK, cool. I don't think there is a built-in option for griddata() that does what you want, so you will need to write it yourself.
This comes down to calculating the distances between N input data points and M interpolation points. This is simple enough to do, but if you have a lot of points it can be slow at ~O(M*N). Here's an example that calculates the distances to all N data points for each interpolation point. If the number of data points within the radius is at least neighbors, it keeps the value; otherwise it writes NODATA.
neighbors is 4 because griddata() will use bilinear interpolation, which needs points bounding the interpolant in each dimension (2*2 = 4).
import numpy as np

# invec - input points, Nx2 numpy array
# mvec - interpolation points, Mx2 numpy array
# just some random points for the example
N = 100
invec = 10 * np.random.random([N, 2])
M = 50
mvec = 10 * np.random.random([M, 2])

# --- here you would put your griddata() call, returning interpolated_values
interpolated_values = np.zeros(M)

NODATA = np.nan
radius = 5.0
neighbors = 4

for m in range(M):
    data_in_radius = np.sqrt(np.sum((invec - mvec[m])**2, axis=1)) <= radius
    if np.sum(data_in_radius) < neighbors:
        interpolated_values[m] = NODATA
Edit:
OK, I re-read the question and noticed the input is really 2D. Example modified accordingly.
Just as an additional comment, this could be greatly accelerated if you first build a coarse mapping from each point mvec[m] to a subset of the relevant data points.
The costliest step in the loop would change from
np.sqrt(np.sum( (invec - mvec[m])**2, axis=1))
to something like
np.sqrt(np.sum( (invec[subset[m]] - mvec[m])**2, axis=1))
There are plenty of ways to do this, for example using a quadtree, a hashing function, or a 2D index. But whether this gives a performance advantage depends on the application, how your data is structured, etc.
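As one concrete instance of that idea, scipy.spatial.cKDTree can count the neighbors within the radius directly, replacing the O(M*N) distance loop; a sketch using the same random setup as the example above:

```python
import numpy as np
from scipy.spatial import cKDTree

# same shapes as before: N data points, M interpolation points
N, M = 100, 50
invec = 10 * np.random.random([N, 2])
mvec = 10 * np.random.random([M, 2])

radius = 5.0
neighbors = 4

tree = cKDTree(invec)
# number of data points within radius of each interpolation point
counts = tree.query_ball_point(mvec, r=radius, return_length=True)
nodata_mask = counts < neighbors   # True where the result should be NODATA
```

interpolated_values[nodata_mask] = np.nan then applies the mask, with the tree doing the spatial pruning instead of the explicit subset bookkeeping.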