I want to convert a .VTK ASCII polydata file into a numpy array of just the coordinates of the points. I first tried this: https://stackoverflow.com/a/11894302, but it stores a (3,3) numpy array where each entry is actually the coordinates of the THREE points that make up that particular cell (in this case a triangle). However, I don't want the cells; I want the coordinates of each point (without repetition). Next I tried this: https://stackoverflow.com/a/23359921/6619666, with some modifications. Here is my final code. Instead of a numpy array, the values are being stored as a tuple, and I am not sure whether that tuple represents each point.
import sys
import numpy
import vtk
from vtk.util.numpy_support import vtk_to_numpy
reader = vtk.vtkPolyDataReader()
reader.SetFileName('Filename.vtk')
reader.ReadAllScalarsOn()
reader.ReadAllVectorsOn()
reader.Update()
nodes_vtk_array = reader.GetOutput().GetPoints().GetData()
print(nodes_vtk_array)
Please give suggestions.
You can use dataset_adapter from vtk.numpy_interface:
from vtk.numpy_interface import dataset_adapter as dsa
polydata = reader.GetOutput()
numpy_array_of_points = dsa.WrapDataObject(polydata).Points
From the Kitware blog:
It is possible to access PointData, CellData, FieldData, Points (subclasses of vtkPointSet only), Polygons (vtkPolyData only) this way.
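For example, a quick sketch of what that access looks like (the names of any point-data arrays depend on your file):
from vtk.numpy_interface import dataset_adapter as dsa
wrapped = dsa.WrapDataObject(reader.GetOutput())
coords = wrapped.Points            # numpy-like (N, 3) array of point coordinates
print(wrapped.PointData.keys())    # names of the point-data arrays, if any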
You can get the point coordinates from a polydata object like so:
polydata = reader.GetOutput()
points = polydata.GetPoints()
array = points.GetData()
numpy_nodes = vtk_to_numpy(array)
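numpy_nodes then has shape (N, 3), one row of x, y, z coordinates per point with no per-cell repetition, so for example:
print(numpy_nodes.shape)   # (N, 3)
print(numpy_nodes[0])      # x, y, z of the first point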
Related
How do I crop a EPSG:25832 raster stored in a numpy array with a wkt polygon?
You don't need to save it to disk; you can use it as a temporary, in-memory object. Simply use the "Mem" format:
from osgeo import gdal, gdal_array as gdarr
input_raster = "path/to/rgb.tif"
# or if the input is already a gdal raster object you can use that gdal object
input_raster=gdal.Open("path/to/rgb.tif")
input_kml = "path/to/cropline.kml" # or any other format
ds = gdal.Warp('',
               input_raster,
               format='Mem',
               cutlineDSName=input_kml,  # or any other vector format
               cutlineLayer='extent',    # if the cutline is a shapefile there is no need for cutlineLayer
               dstNodata=-9999)          # select your no-data value
# do stuff with the ds object; it is your cropped dataset
# for example, convert it to a numpy array
npDs = gdarr.DatasetReadAsArray(ds, 0, 0, ds.RasterXSize, ds.RasterYSize)
print(npDs.shape)
If you have a numpy array it's more tricky, because you need to know the geotransform parameters of that numpy array. My advice is to send the numpy array back into a GDAL object; then you can do the same thing with gdal.Warp().
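For instance, here is a minimal sketch of that round trip, assuming you know the geotransform (origin and pixel size) and CRS of the array; the values and paths below are placeholders:
from osgeo import gdal, osr, gdal_array as gdarr
import numpy as np

arr = np.random.rand(100, 100).astype(np.float32)  # your numpy array

# wrap the array in an in-memory GDAL dataset
drv = gdal.GetDriverByName('MEM')
mem_ds = drv.Create('', arr.shape[1], arr.shape[0], 1, gdal.GDT_Float32)
mem_ds.SetGeoTransform((500000.0, 10.0, 0.0, 6000000.0, 0.0, -10.0))  # placeholder origin/pixel size
srs = osr.SpatialReference()
srs.ImportFromEPSG(25832)
mem_ds.SetProjection(srs.ExportToWkt())
mem_ds.GetRasterBand(1).WriteArray(arr)

# now crop it exactly as above
cropped = gdal.Warp('', mem_ds,
                    format='Mem',
                    cutlineDSName='path/to/cropline.kml',  # placeholder
                    dstNodata=-9999)
cropped_np = gdarr.DatasetReadAsArray(cropped, 0, 0,
                                      cropped.RasterXSize, cropped.RasterYSize)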
The background to my problem is that I have a 3D structure saved in a .vtk file that I need to manipulate (dilate, erode, etc.). The following code snippets are designed to be run sequentially, i.e. if you run them one after the other, there should be no problems (apart from those I mention!).
I'm very new to VTK, so apologies for any very basic mistakes!
Problem
My problem stems from SimpleITK being unable to read UnstructuredGrid or PolyData:
In [1]: import SimpleITK as sitk
In [2]: img_vtk = sitk.ReadImage(file_vtk)
Traceback (most recent call last):
File "<ipython-input-52-435ce999db50>", line 1, in <module>
img_vtk = sitk.ReadImage(file_vtk)
File "/usr/local/lib/python3.5/dist-packages/SimpleITK/SimpleITK.py", line 8614, in ReadImage
return _SimpleITK.ReadImage(*args)
RuntimeError: Exception thrown in SimpleITK ReadImage: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:97:
sitk::ERROR: Unable to determine ImageIO reader for "/data/ROMPA_MRIandSeg/09S/Analysis/1_model/clip_dilate.vtk"
SimpleITK can, however, read StructuredGrid, so I tried to solve this by reading using VTK and converting.
import vtk
reader = vtk.vtkGenericDataObjectReader() # Using generic to allow it to match either Unstructured or PolyData
reader.SetFileName(file_vtk)
reader.Update()
output = reader.GetOutput()
However, from that point on, every method I've tried seems to have failed.
Proposed Solutions
Conversion to numpy, then conversion to sitk image
I attempted to convert it to a numpy array, then interpolate onto a regular grid, with a dummy variable of 1 to specify the values on the structure.
from vtk.util import numpy_support
import scipy.interpolate
import numpy as np
import math
nparray = numpy_support.vtk_to_numpy(output.GetPointData().GetArray(0))
output_bounds = output.GetBounds()
x_grid = range(math.floor(output_bounds[0]),math.ceil(output_bounds[1]),1)
y_grid = range(math.floor(output_bounds[2]),math.ceil(output_bounds[3]),1)
z_grid = range(math.floor(output_bounds[4]),math.ceil(output_bounds[5]),1)
grid = list()
for x in x_grid:
    for y in y_grid:
        for z in z_grid:
            grid.append((x,y,z))
dummy = np.array([1 for i in range(nparray.shape[0])])
npgrid = scipy.interpolate.griddata(nparray,dummy,grid,fill_value=0)
npgrid = npgrid.reshape(len(x_grid),len(y_grid),len(z_grid))
img = sitk.GetImageFromArray(npgrid)
sitk.WriteImage(img,file_out)
However, when I load this in ParaView, a bounding box is displayed for the output, but a contour of the output is empty.
Using ShepardMethod
I attempted to interpolate using the built-in ShepardMethod, after converting the UnstructuredGrid to PolyData (as I'd mostly seen ShepardMethod being applied to PolyData):
bounds = output.GetBounds()
spacings = [1.0,1.0,1.0] # arbitrary spacing
dimensions = [0,0,0]
for i,spacing in enumerate(spacings):
    dimensions[i] = int(math.ceil((bounds[i*2 + 1]-bounds[i*2])/spacing))
vtkPoints = vtk.vtkPoints()
for i in range(0,nparray.shape[0]):
    x = nparray[i,0]
    y = nparray[i,1]
    z = nparray[i,2]
    p = [x,y,z]
    vtkPoints.InsertNextPoint(p)
poly = vtk.vtkPolyData()
poly.SetPoints(vtkPoints)
shepard = vtk.vtkShepardMethod()
shepard.SetInputData(poly)
shepard.SetSampleDimensions(dimensions)
shepard.SetModelBounds(output.GetBounds())
shepard.Update()
shepard_data = shepard.GetOutput().GetPointData().GetArray(0)
shepard_numpy = numpy_support.vtk_to_numpy(shepard_data)
shepard_numpy = shepard_numpy.reshape(dimensions[0],dimensions[1],dimensions[2])
shepard_img = sitk.GetImageFromArray(shepard_numpy)
sitk.WriteImage(shepard_img,file_out)
As with the numpy effort above, this provided a bounding box in ParaView. Applying a contour provided a structure of two triangles, i.e. next to nothing seems to have been successfully written. Alternatively, I attempted to write the output directly using VTK.
shepard_data = shepard.GetOutput()
shepard_grid = vtk.vtkImageToStructuredGrid()
shepard_grid.SetInputData(shepard_data)
shepard_grid.Update()
writer = vtk.vtkStructuredGridWriter()
writer.SetFileName(file_out)
writer.SetInputData(shepard_grid.GetOutput())
writer.Write()
This produced the same output as before.
Using ProbeFilter
I tried the above using ProbeFilter instead (with both conversion to numpy and writing directly). Unfortunately, the output was the same as above.
mesh = vtk.vtkStructuredGrid()
mesh.SetDimensions(dimensions)
probe = vtk.vtkProbeFilter()
probe.SetInputData(mesh)
probe.SetSourceData(output)
probe.Update()
probe_out = probe.GetOutput()
writer = vtk.vtkStructuredGridWriter()
writer.SetFileName(file_out)
writer.SetInputData(probe.GetOutput())
writer.Write()
probe_data = probe.GetOutput().GetPointData().GetArray(0)
probe_numpy = numpy_support.vtk_to_numpy(probe_data)
probe_numpy = probe_numpy.reshape(dimensions[0],dimensions[1],dimensions[2])
probe_img = sitk.GetImageFromArray(probe_numpy)
sitk.WriteImage(probe_img,file_out)
However, this seemed to produce no viable output (vtkStructuredGridWriter produced an empty file, and probe_numpy was empty).
Changing ParaView output
My original data comes from a StructuredGrid .vtk file, which I open in ParaView and then clip to remove structures that aren't required in the mesh. Saving the output produces an UnstructuredGrid, and I have been unable to figure out whether I can change that and avoid this mess in the first place!
Just use "Resample With Dataset" filter in ParaView.
Open ParaView
Open a StructuredGrid file with the geometry you want the output to have
Open your UnstructuredGrid file
Add a "Resample with dataset" filter
Select structured data as source input
Apply
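I believe ParaView's filter wraps VTK's vtkResampleWithDataSet, so if you prefer to script it in plain VTK, a rough sketch would be the following (the file names are placeholders):
import vtk

# read the structured grid that defines the target geometry
grid_reader = vtk.vtkStructuredGridReader()
grid_reader.SetFileName('target_geometry.vtk')  # placeholder
grid_reader.Update()

# read the clipped unstructured grid
ug_reader = vtk.vtkUnstructuredGridReader()
ug_reader.SetFileName('clipped.vtk')  # placeholder
ug_reader.Update()

# sample the unstructured data onto the structured grid's points
resample = vtk.vtkResampleWithDataSet()
resample.SetInputData(grid_reader.GetOutput())  # geometry to sample onto
resample.SetSourceData(ug_reader.GetOutput())   # data being sampled
resample.Update()

writer = vtk.vtkStructuredGridWriter()
writer.SetFileName('resampled.vtk')
writer.SetInputData(resample.GetOutput())
writer.Write()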
I have a sequence of about 100 PNG files containing 512x512 pre-segmented CAT scan data. I want to use VTK in Python to create a 3D model using the marching cubes algorithm. The part that I don't know how to do is how to load the sequence of PNG files and convert them into a single VTK image data object suitable for sending to the vtkDiscreteMarchingCubes algorithm.
I also think that I need to convert the pixel values of the PNG data: right now the segmentation is in the alpha channel, and it needs to become scalar data with values of zero and one.
Use vtkPNGReader to load the individual slices, then populate a vtkImageData whose dimensions you define; for each z-slice (image), fill the image data from the output of the reader into your vtkImageData.
Rough pseudocode - not checked for bugs :)
import glob
import numpy as np
import vtk
from vtk.util import numpy_support

pngfiles = glob.glob('*.png')

# read the first slice to get the in-plane dimensions
png_reader = vtk.vtkPNGReader()
png_reader.SetFileName(pngfiles[0])
png_reader.Update()
x, y, _ = png_reader.GetOutput().GetDimensions()

data_3D = np.zeros([x, y, len(pngfiles)])
for i, p in enumerate(pngfiles):
    png_reader.SetFileName(p)
    png_reader.Update()
    img_data = png_reader.GetOutput()
    # copy this slice's scalars into the 3D array
    # (assumes single-component slices; pick one channel first if the PNGs are RGBA)
    data_3D[:, :, i] = numpy_support.vtk_to_numpy(img_data.GetPointData().GetScalars()).reshape(x, y)
# save your 3D numpy array out, or hand it back to VTK
data_3Dvtk = numpy_support.numpy_to_vtk(data_3D.ravel(), deep=True)
Just in case anyone stumbles on here looking for another way to do this using only VTK, you can use the vtkImageAppend class.
def ReadImages(files):
    reader = vtk.vtkPNGReader()
    image3D = vtk.vtkImageAppend()
    image3D.SetAppendAxis(2)
    for f in files:
        reader.SetFileName(f)
        reader.Update()
        t_img = vtk.vtkImageData()
        t_img.DeepCopy(reader.GetOutput())
        image3D.AddInputData(t_img)
    image3D.Update()
    return image3D.GetOutput()
For converting the data, you can take a look at what t_img.GetPointData().GetArray('PNGImage') gives and check whether it is the expected value.
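To get from the appended volume to the 0/1 label volume the question asks about, a rough sketch might look like this (it assumes the PNGs are RGBA and that any non-zero alpha marks the segmentation; adjust the channel and threshold to your data):
import vtk
import numpy as np
from vtk.util import numpy_support

img = ReadImages(files)  # vtkImageData from the function above

# pull out the alpha channel and threshold it to 0/1
rgba = numpy_support.vtk_to_numpy(img.GetPointData().GetScalars())  # shape (npoints, 4) for RGBA
labels = (rgba[:, 3] > 0).astype(np.uint8)  # assumption: alpha > 0 marks the segmentation

label_arr = numpy_support.numpy_to_vtk(labels, deep=True)
label_arr.SetName('labels')

label_img = vtk.vtkImageData()
label_img.CopyStructure(img)                  # same dimensions/spacing/origin as the volume
label_img.GetPointData().SetScalars(label_arr)

# extract the surface of label 1
mc = vtk.vtkDiscreteMarchingCubes()
mc.SetInputData(label_img)
mc.SetValue(0, 1)
mc.Update()
surface = mc.GetOutput()  # vtkPolyData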
I am trying to splice a FITS array based on the latitudes provided in the header. However, I cannot seem to do so with my knowledge of Python and the astropy documentation. The code I have is something like this:
from astropy.io import fits
import numpy as np
Wise1 = fits.open('Image1.fits')
im1 = Wise1[0].data
im1 = np.where(im1 > latitude1, 0, im1)
newhdu = fits.PrimaryHDU(im1)
newhdulist = fits.HDUList([newhdu])
newhdulist.writeto('1b1_Bg_Removed_2.fits')
Here latitude1 would be a value in degrees, read from the header. So there are two things I need to accomplish:
How to call the header to recognize Galactic Latitudes?
Splice the array in such a way that it only contains values for the range of latitudes, with everything else being 0.
I think by "splice" you mean "cut out" or "crop", based on the example you've shown.
astropy.nddata has a routine for world-coordinate-system-based (i.e., lat/lon or ra/dec) cutouts
However, in the simple case you're dealing with, you just need the coordinates of each pixel. Do this by making a WCS:
from astropy import wcs
import numpy as np

w = wcs.WCS(Wise1[0].header)
xx,yy = np.indices(im1.shape)
lon,lat = w.wcs_pix2world(xx,yy,0)
newim = im1[lat > my_lowest_latitude]
But if you want to preserve the header information, you're much better off using the cutout tool, since you then do not have to manually manage this.
from astropy.nddata import Cutout2D
from astropy import coordinates
from astropy import units as u
# example coordinate - you'll have to figure one out that's in your map
center = coordinates.SkyCoord(mylon*u.deg, mylat*u.deg, frame='fk5')
# then make an array cutout
co = Cutout2D(im1, center, size=[0.1,0.2]*u.arcmin, wcs=w)
# create a new FITS HDU
hdu = fits.PrimaryHDU(data=co.data, header=co.wcs.to_header())
# write to disk
hdu.writeto('cropped_file.fits')
An example use case is in the astropy documentation.
I've looked all over the place and am not finding a solution to this issue. I feel like it should be fairly straightforward, but we'll see.
I have a .FITS format data cube and I need to collapse it into a 2D FITS image. The data cube has two spatial dimensions and one spectral/velocity dimension.
Just looking for a simple python routine to load in the cube and flatten all these layers (i.e. integrate them along the spectral/velocity axis). Thanks for any help.
This tutorial on pyfits is a little old, but still basically correct. The key point is that opening a FITS cube with pyfits (or astropy.io.fits) gives you a 3-dimensional numpy array.
import pyfits
# if you are using astropy then for this example
# from astropy.io import fits as pyfits
data_cube, header_data_cube = pyfits.getdata("data_cube.fits", 0, header=True)
data_cube.shape
# (Z, X, Y)
You then have to decide how to flatten/integrate the cube along the Z axis, and there are plenty of resources out there to help you decide the right way (hopefully based on some analysis framework) to do that.
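For example, a straight sum over the spectral axis is one option (a sketch; pick whatever integration suits your analysis):
import numpy as np

# collapse the spectral/velocity (Z) axis, ignoring NaNs
flat = np.nansum(data_cube, axis=0)
flat.shape
# (X, Y)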
OK, this seems to work:
import pyfits
import numpy as np
hdulist = pyfits.open(filename)
header = hdulist[0].header
data = hdulist[0].data
data = np.nan_to_num(data)
new_data = data[0]
for i in range(1, data.shape[0]):  # loop over the remaining layers/pages
    new_data += data[i]
hdu = pyfits.PrimaryHDU(new_data)
hdu.writeto(new_filename)
One problem with this routine is that WCS coordinates (which are attached to the original data cube) are lost during this conversion.
This is a bit of an old question, but spectral-cube now provides a better solution for this.
Example, based on Teachey's answer:
from spectral_cube import SpectralCube
cube = SpectralCube.read(filename)
summed_image = cube.sum(axis=0)
summed_image.hdu.writeto(new_filename)
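Because cube.sum(axis=0) returns a Projection that keeps the celestial WCS, this also avoids the loss of WCS information noted in the answer above.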