I am trying to render an STL file and save it as a PNG so it serves as a preview of the STL.
I have a project that needs to take layer images (PNG) and save them as 3D (STL). I converted every layer image (approximately 300-400 images) to a NumPy array. Then, from those NumPy arrays, I made vertices and faces. My STL files are faces (triangles) stacked on top of each other, so there is no actual solid body in the final model, just faces aligned together. (The layers themselves have no z value.)
Here is a single layer from one of my models:
And here is the final model:
Now I am trying to generate an image like the second picture above, so that there is a preview of the STL model, and I am trying to figure out a way to save that preview as a PNG file. But I couldn't figure out the plotting libraries' camera controls.
I tried matplotlib, but I couldn't find a camera adjustment setting, so it generates a PNG from a side that I do not want; that's why I want a way to control the camera. There is Blender's Python API, but it seems pretty confusing.
Any ideas on how to do that in Python?
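For reference, here is roughly what I tried with matplotlib (a minimal sketch, assuming numpy-stl and a hypothetical file name model.stl; I am not sure view_init is the camera control I need):
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from stl import mesh  # numpy-stl

# Load the generated STL (hypothetical file name)
my_mesh = mesh.Mesh.from_file('model.stl')

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.add_collection3d(mplot3d.art3d.Poly3DCollection(my_mesh.vectors))

# Scale the axes to the mesh extents
pts = my_mesh.points.flatten()
ax.auto_scale_xyz(pts, pts, pts)

# Supposedly the camera angle, but I could not get the view I want
ax.view_init(elev=30, azim=45)

plt.savefig('preview.png', dpi=150)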
I have an STL model which consists of a cube that was heated. I wrote a piece of software which calculates the temperature distribution at every voxel in the cube, so it is essentially like a heat map where a blue-colored voxel is a colder region and a red-colored voxel is a warmer region of the cube.
The user provides how many voxels the cube STL model will be divided into, and the code calculates the temperature at each voxel in the cube. This information is provided to me in an HDF5 file, which I can turn into a NumPy array with 3 dimensions that indicates the temperature at each x, y, z voxel. I want to display this 3D heatmap in the browser using Three.js so the user can rotate and section through the distribution to see how the temperature is distributed in the cube.
So I am trying to figure out how to display this HDF5 file, or convert the NumPy array into something that can be displayed in the browser, so that the user can see the temperature distribution on the cube. I found an example where they display an NRRD file in Three.js here. I thought maybe I could convert the NumPy array I had into an NRRD file and display it in the browser like the example, using the following code:
import nrrd
import os

# Write the (61, 61, 61) temperature array to an NRRD file next to this script
nrrd_file_name = os.path.join(os.path.dirname(__file__), 'test.nrrd')
nrrd.write(nrrd_file_name, temp_distribution_array, index_order='C')
However, when I use the example code and replace their NRRD file with mine, I simply get a black screen with the console stating WebGL: INVALID_OPERATION: texSubImage3D: type FLOAT but ArrayBufferView not Float32Array. So I am unsure how to convert my 3D NumPy array or my HDF5 file into something that can be visualized in Three.js.
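(One idea I have not been able to verify: since the error mentions Float32Array, maybe the NRRD needs to be written explicitly as float32 rather than the default float64. A sketch of what I mean:)
import os
import numpy as np
import nrrd

# Untested: cast the volume to float32 before writing, so the loader
# hands Three.js a Float32Array instead of 64-bit float data
nrrd_file_name = os.path.join(os.path.dirname(__file__), 'test32.nrrd')
nrrd.write(nrrd_file_name, temp_distribution_array.astype(np.float32), index_order='C')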
Hence, I was wondering if there is a better way to approach this, if anyone knows how to display HDF5 files on the canvas, or if there is some other format I could use to display a 3D temperature heatmap in Three.js?
I'm trying to visualize a set of .dicom files using pydicom and ipyvolume.
I used pydicom to read the files, sorted them by their location, and turned the slices into a 3D array. I could draw a 3D model of the data using ipyvolume.pylab.plot_isosurface(), although I'm not sure this is the right way to visualize medical images (it's all solid pixels with the same opacity and color). I've also tried ipyvolume.pylab.volshow(), but that did not work.
Is there a right way to visualize medical images with ipyvolume, or is this just not the right library for that?
A DICOM file does not have 'voxel' data, so you can't simply plot a DICOM in a 3D view. You should estimate voxel data from the slices of a DICOM series; after that, using a surface extraction algorithm such as Marching Cubes, you can extract the final 3D model. Take a look at CTU.
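A rough sketch of that idea, assuming pydicom and scikit-image, with a hypothetical directory of .dcm slices:
import glob
import numpy as np
import pydicom
from skimage import measure

# Read the slices of the series and sort them by position along the z axis
slices = [pydicom.dcmread(f) for f in glob.glob('series/*.dcm')]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack the 2D pixel arrays into a 3D voxel volume
volume = np.stack([s.pixel_array for s in slices], axis=0)

# Extract a surface mesh with Marching Cubes (the iso level here is an arbitrary example)
verts, faces, normals, values = measure.marching_cubes(volume.astype(np.float32), level=300)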
I haven't used ipyvolume, but looking at the documentation it ought to be able to visualize DICOM image sets.
If you want to try another package, I use SimpleITK to load DICOM images and itkwidgets to do volume visualization in a Jupyter notebook.
Here's a simple notebook that loads a DICOM series and displays it:
import SimpleITK as sitk
import itkwidgets
# Get the DICOM file names in the current directory
names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames('.')
# Read the DICOM series
reader = sitk.ImageSeriesReader()
reader.SetFileNames(names)
img = reader.Execute()
itkwidgets.view(img)
If the directory has more than one DICOM series in it, GetGDCMSeriesFileNames has a seriesID parameter you can give it to specify which series to load.
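For example, GetGDCMSeriesIDs lists the series IDs found in a directory, so you can pick one explicitly:
# Pick a specific series when the directory contains more than one
series_ids = sitk.ImageSeriesReader.GetGDCMSeriesIDs('.')
names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames('.', series_ids[0])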
I have a NumPy array of slices of liver CT segmentations (ground truths). I want to export them into a format viewable in tools like Blender. The slices are black and white, 0-255: anything other than the liver is black, and I want the liver to be viewable in 3D.
The slices are in top view. I used this code on Kaggle to view them, but only in Jupyter: https://www.kaggle.com/akh64bit/full-preprocessing-tutorial/data. Any way to visualize them would work.
You may try transforming your arrays to DICOM format, as mentioned before on Stack Overflow: Create pydicom file from numpy array
Then you can easily visualize DICOM images in various platforms!
For new folks stumbling upon this question who are looking to convert pixels / voxels to an STL file or files, this Python workflow has worked for me:
Load the stack of images as a 3D NumPy array using imageio.imread().
Segment the foreground from the background using one of the many segmentation algorithms from the scikit-image submodule skimage.segmentation, creating a 3D binary image.
Use the marching cubes algorithm from the scikit-image submodule skimage.measure to convert the voxels of interest to a list of faces defined by vertices on the surface of the volume.
Use numpy-stl to create an stl.Mesh object from the list of faces and vertices (as done in this example), then save the mesh with stl.Mesh.save(); a rough sketch of these last two steps is shown below.
As a bonus, you can use the Python package for the Open3D library to open and view multiple STL files!
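A rough sketch of the marching cubes and numpy-stl steps, assuming a segmented 3D NumPy array named binary_volume (the name is only an example):
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

# binary_volume: 3D binary array from the segmentation step
verts, faces, normals, values = measure.marching_cubes(binary_volume, level=0.5)

# Build one triangle per marching cubes face
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    for j in range(3):
        surface.vectors[i][j] = verts[face[j], :]

surface.save('segmentation.stl')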
How do I add circle-clipped image glyphs to my chart, without processing and uploading the images manually beforehand? I'm open to using other modules.
I want the end result to look something like this chart (from nytimes).
http://imgur.com/a/Nv6ta
My current understanding is that we can only load images directly from URLs, which is not my desired outcome.
http://docs.bokeh.org/en/latest/docs/reference/models/glyphs/image_url.html
My current understanding is that we can only load images directly from URLs
This is not correct; there is also ImageRGBA, which allows sending images as raw RGBA data embedded directly in the Bokeh document. See, e.g., this gallery example:
http://docs.bokeh.org/en/latest/docs/gallery/image_rgba.html
So, assuming that images is a Python list of 2D NumPy arrays of RGBA data for the (pre-cropped) images you want to display, Bokeh could show them with:
p.image_rgba(image=images, x=....)
Of course, you have to convert the images to RGBA arrays yourself, and also crop them, so things may simply be easier or more ready-made for this use case with another tool.
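For completeness, a minimal self-contained sketch of image_rgba with a synthetic RGBA array (not a circle-clipped photo, just to show the expected data format):
import numpy as np
from bokeh.plotting import figure, show

# Build one synthetic RGBA image as a 2D array of 32-bit pixels
N = 20
img = np.empty((N, N), dtype=np.uint32)
view = img.view(dtype=np.uint8).reshape((N, N, 4))
view[:, :, 0] = np.linspace(0, 255, N)[:, None]  # red channel
view[:, :, 1] = 158                              # green channel
view[:, :, 2] = np.linspace(0, 255, N)[None, :]  # blue channel
view[:, :, 3] = 255                              # fully opaque

p = figure(x_range=(0, 10), y_range=(0, 10))
p.image_rgba(image=[img], x=0, y=0, dw=10, dh=10)
show(p)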
I am using a friend's QGIS plugin written in Python, which reclassifies the pixels of a raster by setting points; these points span a polygon, and all the pixels within the polygon are converted or reclassified. So far, it works more or less fine if I use a normal raster image from my hard disk in .img or .tiff format. When the pixels are reclassified, all the changes are automatically saved to the image on disk.
As a next step, I want to store all my raster images in a PostGIS database and manipulate them with that tool. Unfortunately, the tool cannot convert the pixels of the image if I load it into QGIS from the database.
The tool does not produce any error message. It starts loading and then nothing happens.
So the question is: do I need to adapt the plugin's saving method, is it generally impossible in QGIS to manipulate raster images that are stored in a database, or do I need special rights to access the raster images' raster data type?