SimpleITK - Coronal/sagittal views problems with size - python

I'm trying to extract all three views (axial, sagittal and coronal) from a CTA in DICOM format, using the SimpleITK library.
I can correctly read the series from a given directory:
...
import SimpleITK as sitk
...
reader = sitk.ImageSeriesReader()
dicom_names = reader.GetGDCMSeriesFileNames(input_dir)
reader.SetFileNames(dicom_names)
# Execute the reader
image = reader.Execute()
...
and then, using numpy arrays as suggested in this question, I'm able to extract and save the 3 views.
...
image_array = sitk.GetArrayFromImage(image)
...
for i in range(image_array.shape[0]):
    output_file_name = axial_out_dir + 'axial_' + str(i) + '.png'
    logging.debug('Saving image to ' + output_file_name)
    imageio.imwrite(output_file_name, convert_img(image_array[i, :, :], axial_min, axial_max), format='png')
...
The other two views are obtained by saving image_array[:, i, :] and image_array[:, :, i], while convert_img(...) is a function that only converts the data type, so it does not alter the shape.
However, the coronal and sagittal views come out stretched, rotated and with wide black bands (very wide in some slices).
(Screenshots omitted: the Slicer3D view, followed by my code's Axial, Sagittal and Coronal output.)
The image shape is 512x512x1723, which results in the axial PNGs being 512x512 pixels and the coronal and sagittal ones 512x1723, so that part seems correct.
Should I try the PermuteAxes filter? The problem is that I was not able to find any documentation on its use from Python (nor from other languages, since the documentation page returns a 404).
Is there also a way to improve the contrast? I have used SimpleITK's AdaptiveHistogramEqualization filter, but the result is much worse than the Slicer3D visualization, besides being very slow.
Any help is appreciated, thank you!

When you convert your SimpleITK image to a NumPy array, all the pixel spacing information is lost (as the comments above suggest). If you do everything in SimpleITK, it retains that spacing information.
It's very easy to extract slices in X, Y and Z from an image in SimpleITK using Python's array slicing:
import SimpleITK as sitk
# a blank test image
img = sitk.Image([100, 101, 102], sitk.sitkUInt8)
# non-uniform spacing, for illustration
img.SetSpacing([1.0, 1.1, 1.2])
# select the 42nd Z slice
zimg = img[:, :, 42]
# select the 0th X slice
ximg = img[0, :, :]
# select the 100th Y slice
yimg = img[:, 100, :]
# print the spacing to show it's retained
print(yimg.GetSpacing())
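As a follow-up for the original question, here is a minimal sketch (file names are illustrative, not from the original post) of pulling a coronal slice from the CT volume read in the question and saving it with SimpleITK only, with no NumPy round-trip:
import SimpleITK as sitk

# assuming `image` is the volume produced by reader.Execute() in the question
coronal = image[:, 100, :]                                            # 100th coronal slice; spacing is retained
coronal = sitk.Cast(sitk.RescaleIntensity(coronal), sitk.sitkUInt8)   # rescale to 0-255 so it can be written as PNG
sitk.WriteImage(coronal, 'coronal_100.png')
Note that the PNG file itself does not store physical spacing, which is why resampling to isotropic voxels (as in the self-answer below) is still needed for undistorted coronal/sagittal views.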

Answering my own question in case someone needs it.
Since I need to use the slices in a deep learning framework and for data augmentation, I need them resampled to a new spacing of (1.0, 1.0, 1.0).
Solved it by using this function:
import numpy as np
import SimpleITK as sitk

def resample_image(itk_image, out_spacing=(1.0, 1.0, 1.0)):
    """
    Resample itk_image to the new out_spacing
    :param itk_image: the input image
    :param out_spacing: the desired spacing
    :return: the resampled image
    """
    # get original spacing and size
    original_spacing = itk_image.GetSpacing()
    original_size = itk_image.GetSize()
    # calculate the new size so the physical extent is preserved
    out_size = [
        int(np.round(original_size[0] * (original_spacing[0] / out_spacing[0]))),
        int(np.round(original_size[1] * (original_spacing[1] / out_spacing[1]))),
        int(np.round(original_size[2] * (original_spacing[2] / out_spacing[2])))
    ]
    # instantiate the resample filter, set its properties and execute it
    resample = sitk.ResampleImageFilter()
    resample.SetOutputSpacing(out_spacing)
    resample.SetSize(out_size)
    resample.SetOutputDirection(itk_image.GetDirection())
    resample.SetOutputOrigin(itk_image.GetOrigin())
    resample.SetTransform(sitk.Transform())
    resample.SetDefaultPixelValue(itk_image.GetPixelIDValue())  # note: GetPixelIDValue() returns the pixel *type* id, not an intensity; 0 is probably the intended fill value here
    resample.SetInterpolator(sitk.sitkNearestNeighbor)
    return resample.Execute(itk_image)
and then saving the slices with numpy arrays as described in the original question.
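A minimal usage sketch (variable names assumed from the question): after resampling to isotropic 1 mm spacing, slicing the array along any axis gives undistorted views.
resampled = resample_image(image)                 # `image` is the volume read from the DICOM series
print(resampled.GetSize(), resampled.GetSpacing())
image_array = sitk.GetArrayFromImage(resampled)   # note: the numpy array is indexed (z, y, x)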

I might be late, but you could use TorchIO for this. I think a good solution for your case is to use the CLI tools that are installed with TorchIO:
$ tiohd your_image.nii.gz
ScalarImage(shape: (1, 512, 512, 1723); spacing: (0.50, 0.50, 1.00); orientation: RAS+; memory: 1.7 GiB; dtype: torch.ShortTensor)
$ torchio-transform your_image.nii.gz Resample one_iso.nii.gz
$ tiohd one_iso.nii.gz
ScalarImage(shape: (1, 256, 256, 1723); spacing: (1.00, 1.00, 1.00); orientation: RAS+; memory: 430.8 MiB; dtype: torch.ShortTensor)
This works because 1 mm is the default target resolution for the Resample transform.
You can also manipulate your images using the normal Python interface for TorchIO, of course.
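For reference, a rough equivalent with the Python interface might look like this (a sketch, not taken from the original answer; the file names are illustrative):
import torchio as tio

image = tio.ScalarImage('your_image.nii.gz')
resample = tio.Resample(1)        # 1 mm isotropic target spacing, the default shown above
resampled = resample(image)
resampled.save('one_iso.nii.gz')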
Disclaimer: I'm the main developer of TorchIO.

Related

How Do I Change the Axis SimpleITK::ImageSeriesWriter Uses?

The SimpleITK::ImageSeriesWriter defaults to slicing the given 3D volume along the Z axis and writing the slices as 2D images in the XY view.
How do I change the axis so that the output is in the XZ or YZ view?
In other words, if the default Z-axis slices are the axial view, how do I get the slices for the coronal and sagittal views?
I tried the GitHub:FNNDSC/med2image output xyz function, but the image arrays are written blindly, so sometimes X and Y are transposed, or one of the axes is reversed (flipped).
So I feel the need to write my own code to have full control.
from os import path
import SimpleITK as sitk

def slice(dcm_folder, output_stem):
    print('Reading Dicom directory:', path.abspath(dcm_folder))
    reader = sitk.ImageSeriesReader()
    dicom_names = reader.GetGDCMSeriesFileNames(dcm_folder)
    reader.SetFileNames(dicom_names)
    image = reader.Execute()
    # cast the bit depth to the PNG-compatible "unsigned char"
    image = sitk.Cast(sitk.RescaleIntensity(image), sitk.sitkUInt8)
    size = image.GetSize()
    print("Image size:", size[0], size[1], size[2])
    # need Z filenames to write
    series_filenames = [output_stem + '-slice' + str(i).zfill(3) + '.png' for i in range(size[2])]
    print('Writing {} image slices'.format(size[2]))
    writer = sitk.ImageSeriesWriter()
    writer.SetFileNames(series_filenames)
    writer.Execute(image)
The code above writes out the Z-axis slices successfully.
How do I modify it so that I can get the slices for the other two views?
You should be able to use the PermuteAxesImageFilter to swap the axes of your volume. Here's the documentation for that filter:
https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1PermuteAxesImageFilter.html
Or if you prefer a procedural interface (as I do), you can use the PermuteAxes function.
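For example, a hedged sketch of how PermuteAxes could be combined with the series writer from the question (the axis orderings below are illustrative; you may still need to flip axes afterwards to get the expected radiological orientation):
import SimpleITK as sitk

# `image` is the volume read in the question, with axes (X, Y, Z)
coronal_volume = sitk.PermuteAxes(image, [0, 2, 1])    # new Z axis is the original Y, so Z slices are coronal
sagittal_volume = sitk.PermuteAxes(image, [1, 2, 0])   # new Z axis is the original X, so Z slices are sagittal
# then run the same Cast/RescaleIntensity and writer.Execute() steps on each permuted volume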
Well, I think you've already fixed your issue, but what I've done is simply import a .mha file (or another extension supported by SimpleITK) and convert it to a 3D array. Then you just slice this array along a different axis at a time. Take a look (Python code):
import SimpleITK as sitk  # importing packages
import matplotlib.pyplot as plt

path = '/current/folder/mha/file'
ct = sitk.ReadImage(path)  # var type is SimpleITK.Image
ndarray = sitk.GetArrayFromImage(ct)  # converting from SimpleITK.Image to a numpy ndarray
# Axial view:
plt.imshow(ndarray[100, :, :], cmap='gray')  # plotting the 100th slice of the axial view
# Coronal view:
plt.imshow(ndarray[:, 100, :], cmap='gray')  # plotting the 100th slice of the coronal view
# Sagittal view:
plt.imshow(ndarray[:, :, 100], cmap='gray')  # plotting the 100th slice of the sagittal view

SimpleITK Selectively Alter Pixels / Slicing

I've loaded a CT scan in SimpleITK. I'd like to do a few things that are pretty simple in NumPy, but haven't figured out how to do them in SimpleITK. I'd like to do them in SimpleITK for speed.
# NumPy: changes all values of 100 to become 500
nparr[nparr == 100] = 500
# SimpleITK:
???
In SimpleITK, image == 100 will produce a binary image of the same dimensions, where all pixels with intensity == 100 are 1/True. This is desired, but I don't believe SimpleITK supports boolean indexing, unfortunately. What's the most efficient way to accomplish this?
I've come up with this funky-looking thing, but I was hoping to find the intended / best-practice way of doing it:
# cast, because the data type returned by the comparison is uint8 otherwise
difference = 500 - 100
offset = sitk.Cast(image == 100, sitk.sitkInt32) * difference
image += offset
You can use the BinaryThreshold filter.
result = sitk.BinaryThreshold( image, 100, 101, 500, 0 )
That should only select pixels with intensity 100.
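Note that BinaryThreshold as written sets every other pixel to the outside value (0 here); if the rest of the image should be kept unchanged, one hedged alternative sketch is to combine the comparison mask with the original image arithmetically, much like the workaround in the question:
# keep all other intensities, only map 100 -> 500 (sketch, not from the original answer)
mask = sitk.Cast(image == 100, image.GetPixelID())
result = image * (1 - mask) + mask * 500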
You are working with the SimpleITK image object; to use it in a numpy style you need the methods GetArrayFromImage and GetImageFromArray, which give you pixel access by converting the image data into a numpy array and back.
import SimpleITK as sitk

difference = 500 - 100
img_arr = sitk.GetArrayFromImage(image)
img_arr[img_arr == 100] += difference      # boolean indexing on the numpy array
output = sitk.GetImageFromArray(img_arr)
output.CopyInformation(image)              # restore origin, spacing and direction

How to perform logical operation and logical indexing using VIPS in Python?

I have the following code that uses Python and OpenCV. Briefly, I have a stack of images taken at different focal depths. The code picks out, at every (x, y) position, the pixel with the largest Laplacian of Gaussian response among all focal depths (z), thus creating a focus-stacked image. The function get_fmap creates a 2D array where each pixel contains the number of the focal plane having the largest LoG response. In the code below, the lines that are commented out are my current VIPS implementation; they don't look compatible within the function definition because they are only a partial solution.
# from gi.repository import Vips
import sys
import numpy as np
import cv2

def get_log_kernel(siz, std):
    x = y = np.linspace(-siz, siz, 2*siz+1)
    x, y = np.meshgrid(x, y)
    arg = -(x**2 + y**2) / (2*std**2)
    h = np.exp(arg)
    h[h < sys.float_info.epsilon * h.max()] = 0
    h = h/h.sum() if h.sum() != 0 else h
    h1 = h*(x**2 + y**2 - 2*std**2) / (std**4)
    return h1 - h1.mean()

def get_fmap(img):  # img is a 3-d numpy array.
    log_response = np.zeros_like(img[:, :, 0], dtype='single')
    fmap = np.zeros_like(img[:, :, 0], dtype='uint8')
    log_kernel = get_log_kernel(11, 2)
    # kernel = get_log_kernel(11, 2)
    # kernel = [list(row) for row in kernel]
    # kernel = Vips.Image.new_from_array(kernel)
    # img = Vips.new_from_file("testimg.tif")
    for ii in range(img.shape[2]):
        # img_filtered = img.conv(kernel)
        img_filtered = cv2.filter2D(img[:, :, ii].astype('single'), -1, log_kernel)
        index = img_filtered > log_response
        log_response[index] = img_filtered[index]
        fmap[index] = ii
    return fmap
and then fmap will be used to pick out pixels from different focal planes to create a focus-stacked image.
This is done on an extremely large image, and I feel VIPS might do a better job than OpenCV here. However, the official documentation provides rather scant information on its Python binding. From the information I can find on the internet, I'm only able to make image convolution work (which, in my case, is an order of magnitude faster than OpenCV). I'm wondering how to implement this in VIPS, especially these lines:
log_response = np.zeros_like(img[:, :, 0], dtype='single')
index = img_filtered > log_response
log_response[index] = img_filtered[index]
fmap[index] = ii
log_response and fmap are initialized as 3D arrays in the question code, whereas the question text states that the output, fmap, is a 2D array. So I am assuming that log_response and fmap are to be initialized as 2D arrays with the same shape as each image. Thus, the edits would be:
log_response = np.zeros_like(img[:,:,0], dtype='single')
fmap = np.zeros_like(img[:,:,0], dtype='uint8')
Now, back to the theme of the question: you are performing 2D filtering on each image one by one and taking the index of the maximum filtered output across all stacked images. In case you didn't know, as per the documentation of cv2.filter2D, it can also be used on a multi-dimensional array, giving a multi-dimensional array as output. Then, getting the index of the maximum across all images is as simple as .argmax(2). Thus, the implementation should be very efficient and would simply be:
fmap = cv2.filter2D(img,-1,log_kernel).argmax(2)
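Putting it together, the whole function from the question could be reduced to something like this sketch (reusing get_log_kernel from the question; this assumes the number of focal planes stays within OpenCV's per-image channel limit):
import cv2

def get_fmap_vectorized(img):  # img is a 3-d numpy array, focal planes stacked along the last axis
    log_kernel = get_log_kernel(11, 2)
    # filter2D applies the same kernel to each 2-d plane of the stack independently
    filtered = cv2.filter2D(img.astype('single'), -1, log_kernel)
    return filtered.argmax(2).astype('uint8')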
After consulting the Python VIPS manual and some trial and error, I've come up with my own answer. My numpy and OpenCV implementation in the question can be translated into VIPS like this:
import pyvips

img = []
for ii in range(num_z_levels):
    img.append(pyvips.Image.new_from_file("testimg_z" + str(ii) + ".tif"))

def get_fmap(img):
    log_kernel = get_log_kernel(11, 2)  # get_log_kernel is my own function, which generates a 2-d numpy array.
    log_kernel = [list(row) for row in log_kernel]  # pyvips.Image.new_from_array takes a list of lists.
    log_kernel = pyvips.Image.new_from_array(log_kernel)  # turn the kernel into a Vips image so Vips can use it.
    log_response = img[0].conv(log_kernel)
    fmap = log_response * 0  # focal-plane index map, starts at plane 0
    for ii in range(1, len(img)):
        img_filtered = img[ii].conv(log_kernel)
        mask = img_filtered > log_response  # compute the mask before updating log_response
        fmap = mask.ifthenelse(ii, fmap)
        log_response = mask.ifthenelse(img_filtered, log_response)
    return fmap
Logical indexing is achieved through the ifthenelse method:
result_img = (test_condition).ifthenelse(value_if_true, value_if_false)
The syntax is rather flexible. The test condition can be a comparison between two images of the same size or between an image and a value, e.g. img1 > img2 or img > 5. Likewise, value_if_true can be a single value or a Vips image.
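For instance, a small illustrative sketch (with hypothetical image names, not from the original answer):
brighter = (img_a > img_b).ifthenelse(img_a, img_b)  # per-pixel maximum of two same-sized images
clipped = (img_a > 200).ifthenelse(255, img_a)       # image-vs-constant condition, constant result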

Better image normalization with numpy

I already achieved the goal described in the title, but I was wondering whether there is a more efficient (or generally better) way to do it. First of all, let me introduce the problem.
I have a set of images of different sizes, all with a width/height ratio less than or equal to 2 (it could be anything, but let's say 2 for now). I want to normalize each one, meaning I want all of them to end up with the same size. Specifically, I am going to do it like this:
Extract the max height over all the images
Zoom each image so that it reaches the max height, keeping its ratio
Add padding of white pixels to the right until the image has a width/height ratio of 2
Keep in mind that the images are represented as numpy matrices of greyscale values in [0, 255].
This is how I'm doing it now in Python:
import numpy as np
from scipy import ndimage

max_height = np.max([len(obs) for obs in data if len(obs[0])/len(obs) <= 2])
for obs in data:
    if len(obs[0])/len(obs) <= 2:
        new_img = ndimage.zoom(obs, round(max_height/len(obs), 2), order=3)
        missing_cols = max_height * 2 - len(new_img[0])
        norm_img = []
        for row in new_img:
            norm_img.append(np.pad(row, (0, missing_cols), mode='constant', constant_values=255))
        norm_img = np.resize(norm_img, (max_height, max_height*2))
A note about this code:
I'm rounding the zoom ratio because it makes the final height equal to max_height. I'm sure this is not the best approach, but it works (any suggestion is appreciated here). What I'd like to do is expand the image, keeping the ratio, until it reaches a height equal to max_height. This is the only solution I found so far, and it worked right away; the interpolation works pretty well.
So my final questions are:
Is there a better approach to achieve what is explained above (image normalization)? Do you think I could have done this differently? Is there a common good practice I'm not following?
Thanks in advance for your time.
Instead of ndimage.zoom you could use scipy.misc.imresize. This function allows you to specify the target size as a tuple, instead of by zoom factor. Thus you won't have to call np.resize later to get the size exactly as desired.
Note that scipy.misc.imresize calls PIL.Image.resize under the hood, so PIL (or Pillow) is a dependency.
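(Aside: scipy.misc.imresize has since been removed from recent SciPy releases. If it is not available, a rough drop-in sketch built directly on Pillow, which the original call wrapped anyway, could look like this; the helper name and behavior are assumptions, not part of the original answer.)
import numpy as np
from PIL import Image

def imresize(arr, target_size, interp='bicubic'):
    # target_size is (height, width), matching the usage below
    resample = Image.BICUBIC if interp == 'bicubic' else Image.BILINEAR
    im = Image.fromarray(np.uint8(arr))
    return np.asarray(im.resize((target_size[1], target_size[0]), resample))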
Instead of using np.pad in a for-loop, you could allocate space for the desired array, norm_arr, first:
norm_arr = np.full((max_height, max_width), fill_value=255)
and then copy the resized image, new_arr into norm_arr:
nh, nw = new_arr.shape
norm_arr[:nh, :nw] = new_arr
For example,
from __future__ import division
import numpy as np
from scipy import misc

data = [np.linspace(255, 0, i*10).reshape(i, 10)
        for i in range(5, 100, 11)]

max_height = np.max([len(obs) for obs in data if len(obs[0])/len(obs) <= 2])
max_width = 2*max_height

result = []
for obs in data:
    norm_arr = obs
    h, w = obs.shape
    if float(w)/h <= 2:
        scale_factor = max_height/float(h)
        target_size = (max_height, int(round(w*scale_factor)))
        new_arr = misc.imresize(obs, target_size, interp='bicubic')
        norm_arr = np.full((max_height, max_width), fill_value=255)
        # check the shapes
        # print(obs.shape, new_arr.shape, norm_arr.shape)
        nh, nw = new_arr.shape
        norm_arr[:nh, :nw] = new_arr
    result.append(norm_arr)
    # visually check the result
    # misc.toimage(norm_arr).show()

python memory intensive script

After about 4 weeks of learning, experimenting, etc. I finally have a script which does what I want. It changes the perspective of images according to a certain projection matrix I have created. When I run the script for one image it works fine; however, I would like to plot six images in one figure, and when I try to do this I get a memory error.
Each image is 2448 px wide and 2048 px high. My script:
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

files = {'cam1': 'c1.jpg',
         'cam2': 'c2.jpg',
         'cam3': 'c3.jpg',
         'cam4': 'c4.jpg',
         'cam5': 'c5.jpg',
         'cam6': 'c6.jpg'}

fig, ax = plt.subplots()
for camname in files:
    img = Image.open(files[camname])
    gray_img = np.asarray(img.convert("L"))
    img = np.asarray(img)
    height, width, channels = img.shape
    # P is assumed to be the dict of per-camera projection matrices built earlier in the script
    usedP = np.array(P[camname][:, [0, 1, 3]])
    usedPinv = np.linalg.inv(usedP)
    U, V = np.meshgrid(range(gray_img.shape[1]),
                       range(gray_img.shape[0]))
    UV = np.vstack((U.flatten(),
                    V.flatten())).T
    ones = np.ones((UV.shape[0], 1))
    UV = np.hstack((UV, ones))
    # create UV_warped
    UV_warped = usedPinv.dot(UV.T).T
    # normalize each vector by dividing by the third column (which should be 1)
    normalize_vector = UV_warped[:, 2].T
    UV_warped = UV_warped/normalize_vector[:, None]
    # masks
    # pixels that are above the horizon and where the V-projection is therefore positive (X in argus): make 0, 0, 1
    # pixels that are too far away: make 0, 0, 1
    masks = [UV_warped[:, 0] <= 0, UV_warped[:, 0] > 2000, UV_warped[:, 1] > 5000, UV_warped[:, 1] < -5000]  # above horizon: => [0,0,1]
    total_mask = masks[0] | masks[1] | masks[2] | masks[3]
    UV_warped[total_mask] = np.array([[0.0, 0.0, 1.0]])
    # show plot
    X_warped = UV_warped[:, 0].reshape((height, width))
    Y_warped = UV_warped[:, 1].reshape((height, width))
    gray_img = gray_img[:-1, :-1]
    # add colors
    rgb = img[:, :-1, :].reshape((-1, 3)) / 255.0  # we have 1 fewer face than grid cells
    rgba = np.concatenate((rgb, np.ones((rgb.shape[0], 1))), axis=1)
    plotimg = ax.pcolormesh(X_warped, Y_warped, img.mean(-1)[:, :], cmap='Greys')
    plotimg.set_array(None)
    plotimg.set_edgecolor('none')
    plotimg.set_facecolor(rgba)
ax.set_aspect('equal')
plt.show()
I have the feeling that numpy.meshgrid is quite memory-intensive, but I'm not sure. Does anybody see where my memory is getting eaten up so rapidly? (BTW, I have a laptop with 12 GB of RAM, of which other programs use only a small part.)
You might want to profile your code with this library.
It will show you where your script is using memory.
There is a Stack Overflow question here about memory profilers. Also, I've used the trick in this answer in the past as a quick way to get an idea of where in the code memory is going out of control. I just print the resource.getrusage() results all over the place. It's not clean, and it doesn't always work, but it's part of the standard library and it's easy to do.
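For example, a quick-and-dirty sketch of that trick (the resource module is Unix-only, and the ru_maxrss units differ by platform: kilobytes on Linux, bytes on macOS):
import resource

def log_mem(label):
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print('{}: peak resident set size so far = {}'.format(label, peak))

log_mem('before meshgrid')
# U, V = np.meshgrid(...)
log_mem('after meshgrid')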
I ordinarily profile with the profile and cProfile modules, as it makes testing individual sections of code fairly easy.
Python Profilers
