How to turn a CT segmentation into a 3D model in Python - python

I have a NumPy array of slices of liver CT segmentations (ground truths). I want to export them into a format viewable in tools like Blender. The slices are black and white, 0-255: anything other than liver is black, and I want the liver to be viewable in 3D.
The slices are in top view. I used the code from this Kaggle notebook to view them, but only inside Jupyter: https://www.kaggle.com/akh64bit/full-preprocessing-tutorial/data. Any way to visualize them would do.

You may try transforming your arrays to DICOM format, as mentioned before on Stack Overflow: Create pydicom file from numpy array
Then you can easily visualize the DICOM images in various platforms!

For new folks stumbling upon this question who are looking to convert pixels/voxels to an STL file or files, this Python workflow has worked for me:
1. Load the stack of images as a 3D NumPy array using imageio.imread().
2. Segment the foreground from the background using one of the many segmentation algorithms in the scikit-image submodule skimage.segmentation, creating a 3D binary image.
3. Use the marching cubes algorithm from the scikit-image submodule skimage.measure to convert the voxels of interest to a list of faces defined by vertices on the surface of the volume.
4. Use numpy-stl to create an stl.Mesh object from the list of faces and vertices (as done in this example), then save the mesh with stl.Mesh.save().
As a bonus, you can use the Python package for the Open3D library to open and view multiple STL files!

Related

Python: Project 2D features onto voxels

My goal is to project feature vectors I have at the image pixel level onto voxels (via ray casting/marching). The ideal output would be to cast a ray and get the first intersected voxel as output. I have the camera intrinsics, extrinsics and the voxels, so I should have everything needed to do it. Currently my voxels are in a sparse format, i.e. an array of coordinates with a feature vector associated with each coordinate.
But unfortunately I couldn't find a simple way to do it. Is there a performant Python library that works well for this use case?
I know of Open3D, but it only seems to support this use case for meshes.
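Absent a ready-made library call, a simple fallback is to ray-march the sparse voxel set yourself with fixed steps (a crude variant of the Amanatides & Woo grid traversal). Everything below, names included, is illustrative, and a fixed step can skip thin voxels if the step is too large:

```python
import numpy as np

def first_hit_voxel(origin, direction, occupied, step=0.5, max_t=100.0):
    """March along a ray and return the first occupied voxel (as an
    integer (x, y, z) tuple), or None. `occupied` is a set of voxel
    coordinates, matching a sparse coordinate-array representation."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    t = 0.0
    while t < max_t:
        p = np.asarray(origin, dtype=float) + t * direction
        voxel = tuple(np.floor(p).astype(int))
        if voxel in occupied:
            return voxel
        t += step
    return None

occupied = {(5, 0, 0)}
print(first_hit_voxel((0.5, 0.5, 0.5), (1, 0, 0), occupied))  # → (5, 0, 0)
```

For camera rays, the origin is the camera center from the extrinsics and each direction comes from unprojecting a pixel with the intrinsics; vectorizing over all pixels (or switching to the exact Amanatides & Woo stepping) would be needed for real performance.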

Is there any way in Python to handle 3D image processing, like pasting one 3D image over another 3D image completely?

I've been doing transparent pasting of one image object over another using PIL.
from PIL import Image
img1 = Image.open("bg")
img2 = Image.open("fg")
img1.paste(img2, (0, 0), img2.convert("RGBA"))
img1.save("final.png", "PNG")
This script works fine for 2D images; I just want someone to point me in the right direction. I want to create characters in 3D, so I'm looking for a 3D solution.
Thanks in advance. :)
If you have a 3D model of a human and another one of a hat, you can load both into the same 3D engine, adjust transformations (e.g. position, rotate and scale the hat so it looks right on the human) and render the unified scene as a single image.
Most 3D engines support this; it depends on what you're comfortable with.
While you could, in theory, use OpenCV built from source with contributed modules such as viz (which uses VTK behind the scenes and includes samples), or even better the ovis package which uses Ogre3D,
in practice there are so many layers in between that I'd go straight for the engine rather than OpenCV with an integration.
For example, you can find Python bindings for Ogre3D directly, and there's also pyglet and many other 3D libraries.
I would warmly recommend trying Open3D though.
It has a wealth of 3D computer vision tools available, but for your scenario in particular its 3D renderer is great and easy to use.
To load a 3D model, check out the mesh file I/O tutorial, and for rendering look at the visualization docs.
Note that Open3D ships with plenty of Python examples and even Jupyter notebooks (e.g. file I/O, visualization) to get started.

3D visualization of .dicom files with ipyvolume

I'm trying to visualize a set of .dicom files using pydicom and ipyvolume.
I used pydicom to read the files, then sorted them by their location and stacked the slices into a 3D array. I could draw a 3D model of the data using ipyvolume.pylab.plot_isosurface(), although I'm not sure this is the right way to visualize medical images (it's all solid pixels with the same opacity and color). I've also tried ipyvolume.pylab.volshow(), but that did not work.
Is there a right way to visualize medical images with ipyvolume, or is this just not the right library for that?
A DICOM file does not contain 'voxel' data, so you can't simply plot a DICOM in a 3D view. You should estimate voxel data from the slices of a DICOM series; after that, a 3D modeling algorithm such as marching cubes can extract the final 3D model. Take a look at CTU.
I haven't used ipyvolume, but looking at the documentation it ought to be able to visualize DICOM image sets.
If you want to try other packages, I use SimpleITK to load DICOM images and itkwidgets to do volume visualization in a Jupyter notebook.
Here's a simple notebook that loads a DICOM series and displays it:
import SimpleITK as sitk
import itkwidgets
# Get the DICOM file names in the current directory
names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames('.')
# Read the DICOM series
reader = sitk.ImageSeriesReader()
reader.SetFileNames(names)
img = reader.Execute()
itkwidgets.view(img)
If the directory has more than one DICOM series in it, GetGDCMSeriesFileNames has a seriesID parameter you can give it to specify which series to load.

Is there a way to take a large group of 2D images and turn them into a 3D image?

I am currently working on a summer research project and we have generated 360 slices of a tumor. I now need to compile (if that's the right word) these images into one large 3D image. Is there a way to do this with either a Python module or an outside tool? I would prefer free software if possible.
Perhaps via matplotlib, though it may require some preprocessing:
https://www.youtube.com/watch?v=5E5mVVsrwZw
In your case, the z axis (3rd dimension) would be indexed by your stack of images. Before proceeding, though, you would probably need to extract the shape of the object you want to reconstruct. For instance, any one of your many 2D images presumably stores an RGB value per pixel, but to plot a skull as in the linked video you would need to extract the borders of your object from each frame (2D shape) and then plot the series. The exact processing depends on how your data is encoded; perhaps it is sufficient to simply plot the stack of images.
Some useful link I found:
https://www.researchgate.net/post/How_to_reconstruct_3D_images_from_two_or_four_2D_images
Python: 3D contour from a 2D image - pylab and contourf
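If the slices are already segmented (or can be thresholded), one concrete route is marching cubes from scikit-image plus matplotlib's plot_trisurf, essentially what the linked video does. The synthetic sphere below stands in for a real 360-slice stack:

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure

# Toy volume in place of your stacked slices; in practice build it with
# e.g. volume = np.stack([imageio.imread(f) for f in files], axis=0)
volume = np.zeros((40, 40, 40))
x, y, z = np.ogrid[:40, :40, :40]
volume[(x - 20)**2 + (y - 20)**2 + (z - 20)**2 < 15**2] = 1  # a sphere

# Extract the surface of the binary volume and plot it
verts, faces, _, _ = measure.marching_cubes(volume, level=0.5)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2], triangles=faces)
plt.show()
```

If the raw slices are grayscale rather than binary, thresholding (or a skimage.segmentation routine) first gives marching cubes a clean object boundary to work with.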

Is there a method of aligning 2D arrays based on features much like Image Registration

I have two 2D arrays: one consists of reference data, the other of measured data. While the matrices have the same shape, the measured data will not be perfectly centered; that is, the sample may not have been perfectly aligned with the detector, so it could be rotated or translated. I would like to align the matrices based on their features, much like image registration. I'm hoping someone can point me toward a Python package capable of this, or let me know whether OpenCV can do this for NumPy arrays with arbitrary values that don't fit the mold of a typical .png or .jpg file.
I have aligned images using OpenCV's image registration functions before. I have attempted to convert my arrays to images using PIL, with the intent of using the image registration functions within OpenCV. If needed I can post my sample code, but at this point I want to know if there is a package with functions capable of doing this.
