Converting point cloud data (.ply) into a range image - python

I have a point cloud with XYZ data. I have read the .ply file using pyntcloud and converted it into a NumPy array of shape (553181, 3).
I want to convert the point cloud into, e.g., an 800x600x3 matrix that can also be treated as an RGB image. For this, I scaled the XY coordinates to the [0, 800] and [0, 600] ranges.
So far:
I have normalized the x and y coordinates to the ranges (0, 800) and (0, 600).
I have created data bins of size 800 and 600 and stored the respective x and y coordinate points.
I don't know how to map these points to get a range image.
I am new to Python and would greatly appreciate help and guidance.
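For the mapping step, one common recipe is to turn the scaled x/y coordinates into integer pixel indices and write each point's z value (depth) into the corresponding cell, keeping the closest point when several land in the same pixel. A minimal sketch, using a random stand-in for the (553181, 3) array and assuming smaller z means closer:

```python
import numpy as np

# Random stand-in for the (553181, 3) array read with pyntcloud.
points = np.random.rand(553181, 3) * [10.0, 10.0, 5.0]
W, H = 800, 600  # target image size

x, y, z = points[:, 0], points[:, 1], points[:, 2]

# Scale x to [0, W-1] and y to [0, H-1] and truncate to pixel indices.
cols = ((x - x.min()) / np.ptp(x) * (W - 1)).astype(int)
rows = ((y - y.min()) / np.ptp(y) * (H - 1)).astype(int)

# Keep the closest (smallest-z) point per pixel; np.minimum.at handles
# several points landing in the same pixel correctly.
depth = np.full((H, W), np.inf)
np.minimum.at(depth, (rows, cols), z)

# Normalize the depths to 0-255 and replicate into 3 channels.
img = np.zeros((H, W, 3), dtype=np.uint8)
valid = np.isfinite(depth)
img[valid] = ((depth[valid] - z.min()) / np.ptp(z) * 255).astype(np.uint8)[:, None]
```

Replicating the normalized depth into three channels gives the 800x600x3 matrix you can treat as an RGB image; pixels no point fell into stay black.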

I don't understand how you could treat a 3D point cloud directly as a 2D image. Anyway, you could use open3d to visualize your point cloud and store it in an .xyzrgb file if you have RGB data, which it seems you don't, since you converted the file into a NumPy array with 3 columns that have to be x, y and z values. Therefore you may need RGB values, or you can assign random values to the point cloud. A good way to do that is using pptk, which lets you generate RGB colors for a point cloud and render images (screenshots) in a built-in viewer (I assume that is what you need).
A simple workflow could be this:
import random
import numpy as np
import pptk

xyz = pptk.rand(100, 3)  # generates a 3-dimensional array; this could also be
                         # a NumPy array - your (553181, 3) array
v = pptk.viewer(xyz)
rgb = pptk.rand(100, 3)  # same here, must be the same size as xyz
# or you can create a single random color of the shape you need
r = np.full(shape=100, fill_value=random.randint(1, 255))
g = np.full(shape=100, fill_value=random.randint(1, 255))
b = np.full(shape=100, fill_value=random.randint(1, 255))
rgb = np.column_stack((r, g, b))  # one (100, 3) per-point attribute array
colors = rgb / 256  # the viewer needs the values in the 0-1 interval
v.attributes(colors)
v.capture('screenshot.png')
Perhaps you would like to read massive-3d-point-clouds-visualization-in-python. It describes more or less what I put in the example above, which is just a short form of it.

Related

Convert 3D points to 2D points in another coordinate system using a table of correspondance with Numpy

I have several pairs of images plus clouds of 3D points that correspond to the same view. There are no rules for the change of coordinates; everything is stored in a table.
Rows: image x coordinate
Columns: image y coordinate
Cell: 3D point (x, y, z) coordinates.
Practically, this is a NumPy array of shape (1920, 1080, 3).
The other way around, finding the coordinates of the 3D point when you have the coordinates in the image, is pretty straightforward:
def image_to_xyz(self, image_points):
    """
    Takes points in the image.
    Returns the corresponding xyz coordinates.
    """
    xyz = self.xyz
    image_points_x = image_points[:, 0]
    image_points_y = image_points[:, 1]
    xyz_points = xyz[image_points_x, image_points_y]
    return xyz_points
For the other direction, it is easy, but dirty, to write a for loop and search for a corresponding point within a precision threshold. I tried it.
It's in Python (I can't use Julia), so the loop took several minutes to complete. Way too long...
Do you have better suggestions?
Thanks
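As a sketch of one vectorized alternative: flatten the (rows, cols, 3) table, build a scipy.spatial.cKDTree over it once, and query all your 3D points in one call instead of looping. The array sizes here are reduced stand-ins for your (1920, 1080, 3) table, and the tol threshold plays the role of your precision threshold:

```python
import numpy as np
from scipy.spatial import cKDTree

# Reduced stand-in for the (1920, 1080, 3) lookup table from the question.
h, w = 192, 108
xyz = np.random.rand(h, w, 3)

# Build the tree once over all 3D points.
tree = cKDTree(xyz.reshape(-1, 3))

def xyz_to_image(points_3d, tol=1e-6):
    """Map (M, 3) 3D points back to (M, 2) image coordinates.

    Entries are set to -1 where no table point lies within `tol`.
    """
    dist, idx = tree.query(points_3d)
    rows, cols = np.unravel_index(idx, (h, w))
    coords = np.stack([rows, cols], axis=1)
    coords[dist > tol] = -1
    return coords

# Usage: pick some known table entries and recover their pixel coordinates.
probe = xyz[[0, 5, 100], [0, 50, 7]]
print(xyz_to_image(probe))  # recovers the original pixel coordinates
```

tree.query is implemented in C, so querying even millions of points takes seconds rather than minutes.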

Visualize SimpleITK coordinates on Paraview

I am trying to make a SimpleITK image where I want specific voxel values to be 1 and the rest to be 0. I am new to SimpleITK, so I feel I am missing something.
Anyway, I have generated some indices to which I assign the voxel value 1. However, I want to be able to visualise how these samples are oriented with respect to each other in space. I have tried multiple approaches, from transforming an array full of zeros with the required indices set to 1 into a NIfTI image, but I am still not able to visualise these points.
Below is a basic code snippet I have tried
def WriteSampleToDisk():
    """Creates an empty image, assigns generated samples the voxel value 1
    and writes it to disk. Returns the image written to disk."""
    img = sitk.Image(512, 512, 416, sitk.sitkInt16)
    img.SetOrigin((0, 0, 0))
    img.SetSpacing((1, 1, 1))
    # Some code to get indices
    for i in range(len(dimx)):  # same number of elements in every index dimension
        img.SetPixel(dimz[i], dimy[i], dimx[i], 1)  # SimpleITK convention takes the z axis as the first axis
    arr = sitk.GetArrayFromImage(img)
    print(np.argwhere(arr == 1))  # this does print the indices where I set the voxel value to 1
    sitk.WriteImage(img, "image.nii")
    return img
However when I try to view it on paraview even after setting the threshold, I still get nothing. What could be the reason for this? Is there a way to circumvent this problem?
Your voxel type is Int16, which has a range of -32768 to 32767, but you're setting your voxels to 1. Given that intensity range, 1 is not very different from 0, so visually it's pretty much going to look the same as 0.
Try setting your "on" voxels to 32767. Also, you might want to dilate your image after setting the voxels: a one-voxel dot will be very small and difficult to see. Run the BinaryDilate filter to grow the size of your dots.
UPDATE: OK, here's an example that I've written. It creates a UInt8 volume and sets random dots in it to 255. Then it creates a distance-map volume from the dots volume.
import random
import SimpleITK as sitk

random.seed()
img = sitk.Image([512, 512, 512], sitk.sitkUInt8)
for i in range(1000):
    x = random.randrange(512)
    y = random.randrange(512)
    z = random.randrange(512)
    print(x, y, z)
    img[x, y, z] = 255
sitk.WriteImage(img, "dots.nrrd")
dist = sitk.SignedMaurerDistanceMap(img)
sitk.WriteImage(dist, "dots.dist.nrrd")
sitk.Show(dist)
For SignedMaurerDistanceMap, the voxels can be any value, as long as it's not the background value.
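The dilation suggestion can also be sketched outside SimpleITK, e.g. with scipy.ndimage.binary_dilation as a stand-in for the BinaryDilate filter (assuming SciPy is available); the idea is the same, growing each one-voxel dot into a visible blob:

```python
import numpy as np
from scipy import ndimage

# A small volume with a few isolated "on" voxels, as in the dots example.
vol = np.zeros((64, 64, 64), dtype=np.uint8)
rng = np.random.default_rng(0)
for _ in range(10):
    x, y, z = rng.integers(0, 64, size=3)
    vol[x, y, z] = 1

# Grow each dot into a small blob so it is visible in a viewer.
dilated = ndimage.binary_dilation(vol, iterations=3).astype(np.uint8) * 255
```

In SimpleITK itself the equivalent step would be something along the lines of sitk.BinaryDilate applied to the dots image.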

Indexing / interpolation

I am working on an image processing script that takes some pixel values and changes their locations. My script returns pixel locations that aren't integers, making it impossible to create an image array where I can simply plug in the pixel value at its respective index.
I am looking to use interp2d to do this.
For example, I have an x, y, value matrix called 'scan':
scan = [[1.25, 1.25, 49], [4.65, 6.34, 154], ...etc]
scan[:, 0] = Xs  # array of x values
scan[:, 1] = Ys  # array of y values
scan[:, 2] = Vs  # array of pixel values
which I need to interpolate onto a uniform 10x10 grid to show as an image.
I am currently trying to use interp2d as:
f = interpolate.interp2d(scan[:, 0], scan[:, 1], scan[:, 2])
image = f(range(10), range(10))
I have many points, and some are out of bounds of the uniform image I am trying to map to.
Thanks,
Niall
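One caveat: interp2d is aimed at gridded data (and has been removed in recent SciPy releases); for scattered points, scipy.interpolate.griddata is the usual tool, and its fill_value parameter handles cells outside the data. A sketch with random stand-in data for 'scan':

```python
import numpy as np
from scipy.interpolate import griddata

# Random stand-in for the scattered scan data: columns are x, y, pixel value.
rng = np.random.default_rng(1)
scan = np.column_stack([
    rng.uniform(-2, 12, 200),   # x, with some points outside the 0-9 range
    rng.uniform(-2, 12, 200),   # y
    rng.uniform(0, 255, 200),   # pixel values
])

# Interpolate onto a uniform 10x10 grid; cells outside the convex hull
# of the inputs get fill_value instead of NaN.
gx, gy = np.meshgrid(np.arange(10), np.arange(10))
image = griddata(scan[:, :2], scan[:, 2], (gx, gy),
                 method='linear', fill_value=0.0)
```

method can also be 'nearest' or 'cubic' depending on how smooth the result should be.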

Resizing a 3D image (and resampling)

I have a 3D image of a brain (let's call it flash) that is currently 263 x 256 x 185. I want to resize it to the size of another image (call it whole_brain_bravo), 256 x 256 x 176, and (hopefully) use Lanczos interpolation to resample (like Image.ANTIALIAS). My (failed) attempt:
from scipy import ndimage as nd
import nibabel as nib
import numpy as np
a = nib.load('flash.hdr') # nib is what I use to load the images
b = nib.load('whole_brain_bravo.hdr')
flash = a.get_data() # Access data as array (in this case memmap)
whole = b.get_data()
downed = nd.interpolation.zoom(flash, zoom=b.shape) # This obviously doesn't work
Have you guys ever done this sort of thing on a 3D image?
From the docstring for scipy.ndimage.interpolate.zoom:
"""
zoom : float or sequence, optional
The zoom factor along the axes. If a float, `zoom` is the same for each
axis. If a sequence, `zoom` should contain one value for each axis.
"""
What is the scale factor between the two images? Is it constant across all axes (i.e. are you scaling isometrically)? In that case zoom should be a single float value. Otherwise it should be a sequence of floats, one per axis.
For example, if the physical dimensions of whole and flash can be assumed to be equal, then you could do something like this:
dsfactor = [w/float(f) for w,f in zip(whole.shape, flash.shape)]
downed = nd.interpolation.zoom(flash, zoom=dsfactor)
According to the docs, the zoom argument is "The zoom factor along the axes". That's a little vague, but it sounds like they mean a scale factor, rather than the desired dimension.
Try this:
zoomFactors = [bi / float(ai) for ai, bi in zip(flash.shape, whole.shape)]
downed = nd.interpolation.zoom(flash, zoom=zoomFactors)
Not sure about choosing a filter - the docs only mention spline interpolations of various orders.
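A quick way to check the factor-based call is with small synthetic stand-ins for flash and whole (note that scipy.ndimage only offers spline orders 0-5, so Lanczos itself isn't available):

```python
import numpy as np
from scipy import ndimage as nd

# Small synthetic stand-ins for the two brain volumes.
flash = np.random.rand(26, 25, 18)   # source, like 263 x 256 x 185
whole = np.random.rand(25, 25, 17)   # target, like 256 x 256 x 176

# Per-axis scale factor = target size / source size.
factors = [t / float(s) for s, t in zip(flash.shape, whole.shape)]
downed = nd.zoom(flash, zoom=factors, order=3)  # cubic spline resampling

print(downed.shape)  # same shape as whole
```

The output dimensions are the input dimensions times the zoom factors, rounded to integers, so exact target/source ratios land exactly on the target shape.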

Interpolation over an irregular grid

So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar.
I then want to interpolate these property (temperature) values onto a bunch of different lat/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400).
Is there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... Thanks!!
Try the combination of inverse-distance weighting and scipy.spatial.KDTree described in the SO answer inverse-distance-weighted-idw-interpolation-with-python. K-d trees work nicely in 2d, 3d, ...; inverse-distance weighting is smooth and local, and k (the number of nearest neighbours) can be varied to trade off speed against accuracy.
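A minimal sketch of that combination (cKDTree plus inverse-distance weighting over the k nearest neighbours), with random stand-ins for the 400x400 grid and the ~10,000 target points; the idw helper here is just an illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

# Scattered samples on an irregular grid (stand-ins for LAT/LON/T).
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(400 * 400, 2))
vals = np.sin(pts[:, 0] * 6) + np.cos(pts[:, 1] * 6)

tree = cKDTree(pts)  # build once, query many times

def idw(query_pts, k=8, power=2, eps=1e-12):
    """Inverse-distance-weighted value at each query point from k neighbours."""
    dist, idx = tree.query(query_pts, k=k)
    w = 1.0 / np.maximum(dist, eps) ** power
    return (w * vals[idx]).sum(axis=1) / w.sum(axis=1)

# Interpolate at ~10,000 target points, as in the question.
targets = rng.uniform(0, 1, size=(10000, 2))
est = idw(targets)
```

Increasing k makes the result smoother but slower; the eps guard avoids division by zero when a query point coincides with a sample.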
There is a nice inverse-distance example by Roger Veciana i Rovira, along with some code using GDAL to write to GeoTIFF if you're into that.
This interpolates to a regular grid, of course, but that works if you first project the data to a pixel grid with pyproj or similar, being careful about which projection is used for your data.
A copy of his algorithm and example script:
from math import pow
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt

def pointValue(x, y, power, smoothing, xv, yv, values):
    nominator = 0
    denominator = 0
    for i in range(0, len(values)):
        dist = sqrt((x - xv[i]) * (x - xv[i]) + (y - yv[i]) * (y - yv[i]) + smoothing * smoothing)
        # If the point is really close to one of the data points,
        # return the data point value to avoid singularities
        if dist < 0.0000000001:
            return values[i]
        nominator = nominator + (values[i] / pow(dist, power))
        denominator = denominator + (1 / pow(dist, power))
    # Return NODATA if the denominator is zero
    if denominator > 0:
        value = nominator / denominator
    else:
        value = -9999
    return value

def invDist(xv, yv, values, xsize=100, ysize=100, power=2, smoothing=0):
    valuesGrid = np.zeros((ysize, xsize))
    for x in range(0, xsize):
        for y in range(0, ysize):
            valuesGrid[y][x] = pointValue(x, y, power, smoothing, xv, yv, values)
    return valuesGrid

if __name__ == "__main__":
    power = 1
    smoothing = 20
    # Creating some data, with the coordinates and values stored in separate lists
    xv = [10, 60, 40, 70, 10, 50, 20, 70, 30, 60]
    yv = [10, 20, 30, 30, 40, 50, 60, 70, 80, 90]
    values = [1, 2, 2, 3, 4, 6, 7, 7, 8, 10]
    # Creating the output grid (100x100 in the example)
    ti = np.linspace(0, 100, 100)
    XI, YI = np.meshgrid(ti, ti)
    # Creating the interpolation function and populating the output matrix
    ZI = invDist(xv, yv, values, 100, 100, power, smoothing)
    # Plotting the result
    plt.subplot(1, 1, 1)
    plt.pcolor(XI, YI, ZI)
    plt.scatter(xv, yv, 100, values)
    plt.title('Inv dist interpolation - power: ' + str(power) + ' smoothing: ' + str(smoothing))
    plt.xlim(0, 100)
    plt.ylim(0, 100)
    plt.colorbar()
    plt.show()
There are a bunch of options here; which one is best will depend on your data. However, I don't know of an out-of-the-box solution for you.
You say your input data is tripolar. There are three main cases for how this data could be structured:
1. Sampled from a 3d grid in tripolar space, projected back to 2d LAT, LON data.
2. Sampled from a 2d grid in tripolar space, projected into 2d LAT, LON data.
3. Unstructured data in tripolar space, projected into 2d LAT, LON data.
The easiest of these is 2: instead of interpolating in LAT, LON space, "just" transform your point back into the source space and interpolate there.
Another option, which works for 1 and 2, is to search for the cells that map from tripolar space to cover your sample point (you can use a BSP or grid-type structure to speed up this search), pick one of the cells, and interpolate inside it.
Finally, there is a heap of unstructured interpolation options, but they tend to be slow. A personal favourite of mine is to use a linear interpolation of the nearest N points (finding those N points can again be done with gridding or a BSP). Another good option is to Delaunay-triangulate the unstructured points and interpolate on the resulting triangular mesh.
Personally, if my mesh were case 1, I'd use an unstructured strategy, as I'd be worried about having to handle searching through cells with overlapping projections; choosing the "right" cell would be difficult.
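For the Delaunay route specifically, scipy.interpolate.LinearNDInterpolator bundles the triangulation and the linear interpolation into one object; a sketch with a synthetic plane (so the result is easy to check) standing in for real LAT/LON/temperature data:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Scattered sample locations and a planar "temperature" field to verify against.
rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(5000, 2))
temp = pts[:, 0] + 2 * pts[:, 1]

# Delaunay-triangulates pts internally, then interpolates linearly per triangle.
interp = LinearNDInterpolator(pts, temp, fill_value=np.nan)

# Evaluate at new off-grid points; linear interpolation reproduces a plane exactly.
query = rng.uniform(1, 9, size=(100, 2))
est = interp(query)
```

Points outside the convex hull of the samples come back as fill_value (NaN here), which makes out-of-coverage queries easy to detect.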
I suggest taking a look at the interpolation features of GRASS (an open-source GIS package) (http://grass.ibiblio.org/gdp/html_grass62/v.surf.bspline.html). It's not in Python, but you can reimplement it or interface with the C code.
Am I right in thinking your data grids look something like this (red is the old data, blue is the new interpolated data)?
(Image: http://www.geekops.co.uk/photos/0000-00-02%20%28Forum%20images%29/DataSeparation.png)
This might be a slightly brute-force approach, but what about rendering your existing data as a bitmap? OpenGL will do simple interpolation of colours for you with the right options configured, and you could render the data as triangles, which should be fairly fast. You could then sample pixels at the locations of the new points.
Alternatively, you could sort your first set of points spatially, find the closest old points surrounding each new point, and interpolate based on the distances to those points.
There is a FORTRAN library called BIVAR, which is very well suited to this problem. With a few modifications you can make it usable from Python using f2py.
From the description:
BIVAR is a FORTRAN90 library which interpolates scattered bivariate data, by Hiroshi Akima.
BIVAR accepts a set of (X,Y) data points scattered in 2D, with associated Z data values, and is able to construct a smooth interpolation function Z(X,Y), which agrees with the given data, and can be evaluated at other points in the plane.
