How Do I Change the Axis SimpleITK::ImageSeriesWriter Uses? - python

The SimpleITK::ImageSeriesWriter defaults to slicing a given 3D volume along the Z axis and writing the slices as 2D images in the XY view.
How do I change the axis so that the output is in the XZ or YZ view?
In other words, if the default Z-axis slices are in the axial view, how do I get slices in the coronal and sagittal views?
I tried the output-xyz function of GitHub:FNNDSC/med2image.
But the image arrays are written blindly, so sometimes the X and Y axes are transposed, or one of the axes is reversed (flipped).
So I feel the need to write my own code to have full control.
from os import path
import SimpleITK as sitk

def slice(dcm_folder, output_stem):
    print('Reading Dicom directory:', path.abspath(dcm_folder))
    reader = sitk.ImageSeriesReader()
    dicom_names = reader.GetGDCMSeriesFileNames(dcm_folder)
    reader.SetFileNames(dicom_names)
    image = reader.Execute()
    # cast the bit depth to PNG-compatible "unsigned char"
    image = sitk.Cast(sitk.RescaleIntensity(image), sitk.sitkUInt8)
    size = image.GetSize()
    print("Image size:", size[0], size[1], size[2])
    # need one filename per Z slice to write
    series_filenames = [output_stem + '-slice' + str(i).zfill(3) + '.png'
                        for i in range(size[2])]
    print('Writing {} image slices'.format(size[2]))
    writer = sitk.ImageSeriesWriter()
    writer.SetFileNames(series_filenames)
    writer.Execute(image)
The code above will write out the Z-axis slices successfully.
How do I modify the code so that I can get the slices for the other two views?

You should be able to use the PermuteAxesImageFilter to swap the axes of your volume. Here's the documentation for that filter:
https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1PermuteAxesImageFilter.html
Or if you prefer a procedural interface (as I do), you can use the PermuteAxes function.
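For example, a minimal sketch using the procedural function (assuming `image` is the 3D volume read by the question's code; whether the result is truly coronal or sagittal depends on the scan's orientation):

import SimpleITK as sitk

# reorder the axes so the axis you want to slice along becomes Z;
# the ImageSeriesWriter then slices along it exactly as before
coronal = sitk.PermuteAxes(image, [0, 2, 1])   # new Z axis = original Y
sagittal = sitk.PermuteAxes(image, [1, 2, 0])  # new Z axis = original X

writer = sitk.ImageSeriesWriter()
writer.SetFileNames(['coronal-slice' + str(i).zfill(3) + '.png'
                     for i in range(coronal.GetSize()[2])])
writer.Execute(coronal)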

Well, I think you've already fixed your issue, but here is another approach: import an .mha file (or another extension supported by SimpleITK) and convert it to a 3D array. Then all you need to do is slice this array along a different axis each time. Take a look (Python code):
import SimpleITK as sitk              # importing the package
import matplotlib.pyplot as plt      # needed for the plotting below

path = '/current/folder/mha/file'
ct = sitk.ReadImage(path)             # var type is SimpleITK.Image
ndarray = sitk.GetArrayFromImage(ct)  # converting from SimpleITK.Image to numpy ndarray
# Axial view:
plt.imshow(ndarray[100, :, :], cmap='gray')  # plotting the 100th slice from the axial view
# Coronal view:
plt.imshow(ndarray[:, 100, :], cmap='gray')  # plotting the 100th slice from the coronal view
# Sagittal view:
plt.imshow(ndarray[:, :, 100], cmap='gray')  # plotting the 100th slice from the sagittal view
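And if you want to write the slices out rather than just display them, a small sketch using plt.imsave (my addition, not part of the original answer):

import matplotlib.pyplot as plt

# write every coronal slice to its own PNG; the filename pattern is arbitrary
for i in range(ndarray.shape[1]):
    plt.imsave('coronal-' + str(i).zfill(3) + '.png', ndarray[:, i, :], cmap='gray')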

Related

SimpleITK - Coronal/sagittal views problems with size

I'm trying to extract all three views (axial, sagittal and coronal) from a CTA in DICOM format, using the SimpleITK library.
I can correctly read the series from a given directory:
...
import SimpleITK as sitk
...
reader = sitk.ImageSeriesReader()
dicom_names = reader.GetGDCMSeriesFileNames(input_dir)
reader.SetFileNames(dicom_names)
# Execute the reader
image = reader.Execute()
...
and then, using numpy arrays as stated in this question, I'm able to extract and save the 3 views.
...
image_array = sitk.GetArrayFromImage(image)
...
for i in range(image_array.shape[0]):
    output_file_name = axial_out_dir + 'axial_' + str(i) + '.png'
    logging.debug('Saving image to ' + output_file_name)
    imageio.imwrite(output_file_name, convert_img(image_array[i, :, :], axial_min, axial_max), format='png')
...
The other two views are made by saving image_array[:, i, :] and image_array[:, :, i], while convert_img(..) is a function that only converts the data type, so it does not alter any shape.
However, the coronal and sagittal views come out stretched, rotated, and with wide black bands (very wide in some slices).
Here's the screenshot from Slicer3D, and this is the output of my code for the axial, sagittal and coronal views (the images are omitted here).
The image shape is 512x512x1723, which results in the axial PNGs being 512x512 pixels and the coronal and sagittal ones 512x1723, so this seems correct.
Should I try using the PermuteAxes filter? The problem is that I was not able to find any documentation on its use in Python (nor in other languages, due to a 404 on the documentation page).
Is there also a way to improve the contrast? I have used the AdaptiveHistogramEqualization filter from SimpleITK, but it's way worse than the Slicer3D visualization, besides being very slow.
Any help is appreciated, thank you!
When you convert your SimpleITK image to a NumPy array, all the pixel spacing information is lost (as the comments above suggest). If you do everything in SimpleITK, it retains that spacing information.
It's very easy to extract slices in X, Y and Z from an image in SimpleITK using python's array slicing:
import SimpleITK as sitk
# a blank test image
img = sitk.Image([100, 101, 102], sitk.sitkUInt8)
# non-uniform spacing, for illustration
img.SetSpacing([1.0, 1.1, 1.2])
# select the 42nd Z slice
zimg = img[:, :, 42]
# select the 0th X slice
ximg = img[0, :, :]
# select the 100th Y slice
yimg = img[:, 100, :]
# print the spacing to show it's retained
print(yimg.GetSpacing())
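Since each extracted slice is itself a 2D SimpleITK image, it can also be written straight to disk; a minimal sketch (the filename is just an example):

# the slice keeps its pixel type, so an integer image writes to PNG directly
sitk.WriteImage(zimg, 'slice_z042.png')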
Answering my own question in case someone needs it.
Since I need to use the slices in a deep learning framework and for data augmentation, I need them resampled to a new spacing of (1.0, 1.0, 1.0).
I solved it by using this function:
import numpy as np
import SimpleITK as sitk

def resample_image(itk_image, out_spacing=(1.0, 1.0, 1.0)):
    """
    Resample itk_image to the new out_spacing
    :param itk_image: the input image
    :param out_spacing: the desired spacing
    :return: the resampled image
    """
    # get original spacing and size
    original_spacing = itk_image.GetSpacing()
    original_size = itk_image.GetSize()
    # calculate new size
    out_size = [
        int(np.round(original_size[0] * (original_spacing[0] / out_spacing[0]))),
        int(np.round(original_size[1] * (original_spacing[1] / out_spacing[1]))),
        int(np.round(original_size[2] * (original_spacing[2] / out_spacing[2])))
    ]
    # instantiate resample filter with properties and execute it
    resample = sitk.ResampleImageFilter()
    resample.SetOutputSpacing(out_spacing)
    resample.SetSize(out_size)
    resample.SetOutputDirection(itk_image.GetDirection())
    resample.SetOutputOrigin(itk_image.GetOrigin())
    resample.SetTransform(sitk.Transform())
    resample.SetDefaultPixelValue(0)  # fill value for voxels outside the input
    resample.SetInterpolator(sitk.sitkNearestNeighbor)
    return resample.Execute(itk_image)
and then saving by using numpy arrays as stated in the original question.
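For reference, a hypothetical usage sketch, assuming `image` is the volume read from the DICOM series as in the question:

import SimpleITK as sitk

iso_image = resample_image(image, out_spacing=(1.0, 1.0, 1.0))
iso_array = sitk.GetArrayFromImage(iso_image)  # numpy index order is (z, y, x)
coronal_slice = iso_array[:, 100, :]           # now isotropic in physical terms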
I might be late, but you could use TorchIO for this. I think a good solution for your case is to use the CLI tools that are installed with TorchIO:
$ tiohd your_image.nii.gz
ScalarImage(shape: (1, 512, 512, 1723); spacing: (0.50, 0.50, 1.00); orientation: RAS+; memory: 1.7 GiB; dtype: torch.ShortTensor)
$ torchio-transform your_image.nii.gz Resample one_iso.nii.gz
$ tiohd one_iso.nii.gz
ScalarImage(shape: (1, 256, 256, 1723); spacing: (1.00, 1.00, 1.00); orientation: RAS+; memory: 430.8 MiB; dtype: torch.ShortTensor)
This works because 1 mm is the default target resolution for the Resample transform.
You can also manipulate your images using the normal Python interface for TorchIO, of course.
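For example, a rough sketch of that interface, mirroring the CLI example above:

import torchio as tio

image = tio.ScalarImage('your_image.nii.gz')
resample = tio.Resample(1)      # 1 mm isotropic, same as the CLI default
resampled = resample(image)
resampled.save('one_iso.nii.gz')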
Disclaimer: I'm the main developer of TorchIO.

Plotting a new image without using the old one in matplotlib?

I'm new to Python and matplotlib.
I have implemented the k-means algorithm in order to compress an image into clusters and then plot the changed image.
My question is: I was not able to plot the new image without using the old one as a base. I tried a few things but could not quite get the result I want, and it's bad programming to pass the old image as an argument when I don't really need its values.
Can someone please help?
I tried to create a new ndarray but it did not work.
Here is my function:
def changePic(newPixelList, oldPixel, image_size):
    index = 0
    new_pixels = []
    for pixel in newPixelList:
        oldPixel[index] = pixel.classification
        index += 1
    l = oldPixel.reshape(image_size)
    plt.imshow(l)
    plt.grid(False)
    plt.show()
As you can see, I don't really use the oldPixel values, just its structure.
(The screenshot showing the type of oldPixel is omitted.)
Here is my loadPic method, where X.copy() is what gets passed as oldPixel:
def loadPic():
    """
    Load pic to array
    :return: copy of original X, new list of pixels, image size
    """
    # data preparation (loading, normalizing, reshaping);
    # imread and the Pixel class come from elsewhere in the program
    path = 'dog.jpeg'
    A = imread(path)
    A = A.astype(float) / 255.
    img_size = A.shape
    X = A.reshape(img_size[0] * img_size[1], img_size[2])
    listOfPixel = []
    for pixel in X:
        listOfPixel.append(Pixel(pixel))
    return X.copy(), listOfPixel, img_size
Try this:
def changePic(newPixelList, oldPixel, image_size, picture_num):
    index = 0
    new_pixels = []
    for pixel in newPixelList:
        oldPixel[index] = pixel.classification
        index += 1
    l = oldPixel.reshape(image_size)
    plt.figure(picture_num)  # open a new, separately numbered figure
    plt.imshow(l)
    plt.grid(False)
    plt.show()
Every plot that you generate should have a different picture_num in order to have separate plots.

SimpleITK Resize images

I have a set of 3D volumes that I am reading with SimpleITK:
import SimpleITK as sitk

for filename in filenames:
    image = sitk.ReadImage(filename)
Each of the volumes has a different size, spacing, origin and direction. This code yields different values for different images:
print(image.GetSize())
print(image.GetOrigin())
print(image.GetSpacing())
print(image.GetDirection())
My question is: how do I transform the images to have the same size and spacing, so that they all have the same resolution and shape when converted to numpy arrays? Something like:
import SimpleITK as sitk

for filename in filenames:
    image = sitk.ReadImage(filename)
    image = transform(image, fixed_size, fixed_spacing)
    array = sitk.GetArrayFromImage(image)
The way to do this is to use the Resample function with fixed/arbitrary size and spacing. Below is a code snippet showing construction of this "reference_image" space:
import numpy as np
import SimpleITK as sitk

# `dimension`, `reference_physical_size` and `data` (the list of loaded
# images) are defined earlier in the notebook this snippet comes from
reference_origin = np.zeros(dimension)
reference_direction = np.identity(dimension).flatten()
reference_size = [128] * dimension  # arbitrary size, smallest that yields desired results
reference_spacing = [phys_sz / (sz - 1) for sz, phys_sz in zip(reference_size, reference_physical_size)]

reference_image = sitk.Image(reference_size, data[0].GetPixelIDValue())
reference_image.SetOrigin(reference_origin)
reference_image.SetSpacing(reference_spacing)
reference_image.SetDirection(reference_direction)
For a turnkey solution have a look at this Jupyter notebook which illustrates how to do data augmentation with variable sized images in SimpleITK (code above is from the notebook). You may find the other notebooks from the SimpleITK notebook repository of use too.
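Once the reference image is defined, the resampling call itself can be a one-liner. A sketch, assuming `image` is one of the loaded volumes (the identity transform only suffices when the input already overlaps the reference region in physical space; the linked notebook composes a centering transform for the general case):

import SimpleITK as sitk

# map the image onto the common reference grid
resampled = sitk.Resample(image, reference_image, sitk.Transform(),
                          sitk.sitkLinear, 0.0, image.GetPixelID())
array = sitk.GetArrayFromImage(resampled)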
According to SimpleITK's documentation, the process of image resampling involves 4 components:
Image - the image we resample, given in coordinate system f;
Resampling grid - a regular grid of points given in coordinate system m which will be mapped to coordinate system f;
Transformation T_f^m - maps points from coordinate system f to coordinate system m;
Interpolator - a method for obtaining the intensity values at arbitrary points in coordinate system f from the values of the points defined by the Image.
The following snippet is for downsampling the image preserving its coordinate system properties:
import numpy as np
import SimpleITK as sitk

def downsamplePatient(patient_CT, resize_factor):
    original_CT = sitk.ReadImage(patient_CT, sitk.sitkInt32)
    dimension = original_CT.GetDimension()
    reference_physical_size = np.zeros(original_CT.GetDimension())
    reference_physical_size[:] = [(sz - 1) * spc if sz * spc > mx else mx
                                  for sz, spc, mx in zip(original_CT.GetSize(),
                                                         original_CT.GetSpacing(),
                                                         reference_physical_size)]

    reference_origin = original_CT.GetOrigin()
    reference_direction = original_CT.GetDirection()
    reference_size = [round(sz / resize_factor) for sz in original_CT.GetSize()]
    reference_spacing = [phys_sz / (sz - 1) for sz, phys_sz in zip(reference_size, reference_physical_size)]

    reference_image = sitk.Image(reference_size, original_CT.GetPixelIDValue())
    reference_image.SetOrigin(reference_origin)
    reference_image.SetSpacing(reference_spacing)
    reference_image.SetDirection(reference_direction)

    reference_center = np.array(
        reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize()) / 2.0))

    transform = sitk.AffineTransform(dimension)
    transform.SetMatrix(original_CT.GetDirection())
    transform.SetTranslation(np.array(original_CT.GetOrigin()) - reference_origin)

    centering_transform = sitk.TranslationTransform(dimension)
    img_center = np.array(original_CT.TransformContinuousIndexToPhysicalPoint(np.array(original_CT.GetSize()) / 2.0))
    centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))

    centered_transform = sitk.Transform(transform)
    centered_transform.AddTransform(centering_transform)

    # sitk.Show(sitk.Resample(original_CT, reference_image, centered_transform, sitk.sitkLinear, 0.0))
    return sitk.Resample(original_CT, reference_image, centered_transform, sitk.sitkLinear, 0.0)
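A hypothetical usage, with 'brain_ct.mha' standing in for a real file:

# halve the resolution of a CT volume and save the result
downsampled = downsamplePatient('brain_ct.mha', 2.0)
sitk.WriteImage(downsampled, 'brain_ct_half.mha')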
Applying the snippet above to a brain CT scan gives the expected downsampled result (screenshots omitted).

Matplotlib - how to rescale pixel intensities for RGB image

I am confused about how matplotlib handles fp32 pixel intensities. To my understanding, it rescales the values between the max and min values of the image. However, when I try to view images originally in [0,1] after rescaling their pixel intensities to [-1,1] (by im*2-1) using imshow(), the image appears differently colored. How do I rescale so that the images don't differ?
EDIT: Please look at the comparison image (omitted here).
PS: I need to do this as part of a program that outputs those values in [-1,1].
Following is the code used for this:
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc

img = np.float32(misc.face(gray=False))
fig, ax = plt.subplots(1, 2)
img = img / 255  # convert to [0, 1] range
print(np.max(img), np.min(img))
img0 = ax[0].imshow(img)
plt.colorbar(img0, ax=ax[0])
print(np.max(2*img - 1), np.min(2*img - 1))
img1 = ax[1].imshow(2*img - 1)  # convert to [-1, 1] range
plt.colorbar(img1, ax=ax[1])
plt.show()
The max, min output is:
(1.0, 0.0)
(1.0, -1.0)
You are probably using matplotlib incorrectly here.
The normalization step should work correctly if it's active. The docs tell us that it is only active by default if the input image is of type float!
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
fig, ax = plt.subplots(2,2)
# This usage shows different colors because there is no normalization
# FIRST ROW
f = misc.face(gray=True)
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[0,0].imshow(f)
ax[0,1].imshow(g)
# This usage makes sure that the input-image is of type float
# -> automatic normalization is used!
# SECOND ROW
f = np.asarray(misc.face(gray=True), dtype=float) # TYPE!
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[1,0].imshow(f)
ax[1,1].imshow(g)
plt.show()
Output
uint8
float64
Analysis
The first row shows the wrong usage, because the input is of type int and therefore no normalization will be used.
The second row shows the correct usage!
EDIT:
sascha has correctly pointed out in the comments that rescaling is not applied to RGB images; inputs must be in the [0, 1] range.
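So for RGB data, map the [-1, 1] program output back to [0, 1] yourself before calling imshow; a short sketch:

import numpy as np
import matplotlib.pyplot as plt
from scipy import misc

img = np.float32(misc.face(gray=False)) / 255  # RGB in [0, 1]
out = 2 * img - 1                              # the program's [-1, 1] output
plt.imshow((out + 1) / 2)                      # undo the rescale for display
plt.show()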

Load sequence of PNGs into vtkImageData for 3D volume render using python

I have a sequence of about 100 PNG files containing 512x512 pre-segmented CAT scan data. I want to use vtk in Python to create a 3D model using the marching cubes algorithm. The part that I don't know how to do is loading the sequence of PNG files and converting them to a single vtk image data object suitable for sending to the vtkDiscreteMarchingCubes algorithm.
I also think that I need to convert the pixel values of the PNG data, because right now the data is in the alpha channel, so it needs to be converted into scalar data with values of zero and one.
Use vtkPNGReader to load the individual slices, then populate a vtkImageData whose dimensions you define, filling it z-slice by z-slice from the reader's output.
Rough pseudocode - not checked for bugs :)
import glob
import numpy as np
import vtk
from vtk.util import numpy_support

pngfiles = glob.glob('*.png')

png_reader = vtk.vtkPNGReader()
png_reader.SetFileName(pngfiles[0])
png_reader.Update()  # update before asking for dimensions
x, y, _ = png_reader.GetOutput().GetDimensions()
data_3D = np.zeros([x, y, len(pngfiles)])

for i, p in enumerate(pngfiles):
    png_reader.SetFileName(p)  # read each file in turn
    png_reader.Update()
    img_data = png_reader.GetOutput()
    # vtk_to_numpy wants a vtkDataArray; this assumes single-component
    # (grayscale) PNGs -- for RGBA, pick one channel of the (n, 4) array
    scalars = numpy_support.vtk_to_numpy(img_data.GetPointData().GetScalars())
    data_3D[:, :, i] = scalars.reshape(y, x).T

# save your 3D numpy array out, or convert back (numpy_to_vtk wants <= 2D)
data_3Dvtk = numpy_support.numpy_to_vtk(data_3D.ravel())
Just in case anyone stumbles on here looking for another way to do this using only vtk, you can use the vtkImageAppend class.
import vtk

def ReadImages(files):
    reader = vtk.vtkPNGReader()
    image3D = vtk.vtkImageAppend()
    image3D.SetAppendAxis(2)  # stack the 2D slices along Z
    for f in files:
        reader.SetFileName(f)
        reader.Update()
        t_img = vtk.vtkImageData()
        t_img.DeepCopy(reader.GetOutput())
        image3D.AddInputData(t_img)
    image3D.Update()
    return image3D.GetOutput()
For converting the data, you can take a look at what t_img.GetPointData().GetArray('PNGImage') gives and see if it is the expected value.
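To get the zero/one scalar volume the original question asks for, one possible sketch uses vtkImageThreshold (the threshold value here is an assumption about the segmentation labels):

import vtk

volume = ReadImages(files)
threshold = vtk.vtkImageThreshold()
threshold.SetInputData(volume)
threshold.ThresholdByUpper(1)  # values >= 1 ...
threshold.ReplaceInOn()
threshold.SetInValue(1)        # ... become 1
threshold.ReplaceOutOn()
threshold.SetOutValue(0)       # everything else becomes 0
threshold.Update()
binary = threshold.GetOutput()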
