How to deal with negative pixel values in images loaded using SimpleITK - python

I have been working on DICOM CT scan images. I used SimpleITK to read the images into a numpy array, but the pixel values are negative floats, as shown in the screenshot, and the dtype of each pixel is float32. How can I convert these pixel values to be able to train a TensorFlow 3D CNN model?
# Read the .nii image containing the volume with SimpleITK:
sitk_obj = sitk.ReadImage(filename)
# and access the numpy array:
image = sitk.GetArrayFromImage(sitk_obj)
(screenshot: negative pixel values)
The images read in have different shapes. How can I resize them all to one specific, constant shape? (as shown in the image below)
(screenshot: different image shapes)

If you use SimpleITK's RescaleIntensity function, you can rescale the pixel values to whatever range you require. Here's the docs for that function:
https://simpleitk.org/doxygen/latest/html/namespaceitk_1_1simple.html#af34ebbd0c41ae0d0a7a152ac1382bac6
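For example, a minimal sketch (the 0-255 output range is just an illustrative choice; pick whatever range your model expects):
import SimpleITK as sitk

sitk_obj = sitk.ReadImage(filename)
# Linearly rescale all intensities into [0, 255]
rescaled = sitk.RescaleIntensity(sitk_obj, outputMinimum=0, outputMaximum=255)
image = sitk.GetArrayFromImage(rescaled)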
To resize your images you can use SimpleITK's ResampleImageFilter. Here's the docs for that class:
https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1ResampleImageFilter.html
The following StackOverflow answer shows how to create a reference image that you resample your image onto:
https://stackoverflow.com/a/48065819/3712577
And this GitHub Gist shows how to resample several images to the same reference image:
https://gist.github.com/zivy/79d7ee0490faee1156c1277a78e4a4c4
Note that SimpleITK considers images as objects in physical space. So if the image origins, directions, and pixel spacings do not match up, then you will not get the result you expect.
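As a rough sketch of the resampling itself (assuming you have already built a reference image with the target size, spacing, origin, and direction, as in the links above):
import SimpleITK as sitk

# Identity transform: we only want to change the sampling grid, not move the image
resampled = sitk.Resample(sitk_obj, reference, sitk.Transform(),
                          sitk.sitkLinear, 0.0, sitk_obj.GetPixelID())
For CT you may want a default pixel value other than 0.0 (e.g. -1000, the Hounsfield value of air) for regions that fall outside the original image.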

Related

Getting the intensities of a certain region of an MR image

I have a 3D MR image as a NIfTI file (.nii.gz). I also have a 'mask' image as a NIfTI file, which is just a bunch of 0s and 1s. The 1s in this mask image represent the region of the 3D MR image I am interested in.
I want to retrieve the intensities of the pixels in the 3D MR image which fall inside the mask (i.e. are 1s in the mask image file). The only intensity feature I have found is sitk.MinimumMaximumImageFilter, which isn't too useful, since it operates on the entire image (instead of a particular region) and also only gives the minimum and maximum of that image.
I don't think that the GetPixel() function helps me in this case either, since the 'pixel value' that it outputs is different to the intensity which I observe in the ITK-SNAP viewer. Is this correct?
What tool or feature could I use to help in this scenario?
Use itk::BinaryImageToStatisticsLabelMapFilter.
You might want to use itk::Statistics::MaskedImageToHistogramFilter, followed by min = histogram.Quantile(0, 0.0) and max = histogram.Quantile(0, 1.0). You probably need to use more bins than the example uses.
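If you are working in Python with SimpleITK, a plain numpy mask is also an option. A minimal sketch, assuming the image and mask files share the same voxel grid:
import SimpleITK as sitk

img = sitk.GetArrayFromImage(sitk.ReadImage('image.nii.gz'))
msk = sitk.GetArrayFromImage(sitk.ReadImage('mask.nii.gz'))

# 1D array holding the intensities of all voxels inside the mask
values = img[msk == 1]
print(values.min(), values.max(), values.mean())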

Image Operations with Python

I hope you're all doing well!
I'm new to image manipulation, so I want to apologize right here for my simple question. I'm currently working on a problem that involves classifying an object called a jet into two known categories. This object is made of sub-objects. My idea is to use these sub-objects to transform each jet into a pixel image, and then apply convolutional neural networks to find the patterns.
Here is an example of the pixel images:
(figure: jet constituents' pixel distribution)
To standardize all the images, I want to find the two most intense pixels and make sure the axis connecting them is in the vertical direction, as well as make sure that the most intense pixel is at the top. It also would be good to impose that one of the sides (left or right) of the image contains the majority of the intensity and to normalize the intensity of the whole image to 1.
My question is: as I'm new to this kind of processing, I don't know if there is a library in Python that can handle these operations. Are you aware of any?
PS: the picture was taken from here: https://arxiv.org/abs/1407.5675
You can look into the OpenCV library for Python:
https://docs.opencv.org/master/d6/d00/tutorial_py_root.html
It supports a lot of image processing functions.
In your case, it would probably be easier to convert the image into a color space in which one axis stands for intensity (e.g. HSI, HSL, HSV) and then find the indices of the maximum values along that axis (this should return the pixels with the highest intensity in the image).
Generally, in Python, we use the PIL library for basic image manipulations and OpenCV for advanced ones.
But, if I understand your task correctly, you can just think of an image as a multidimensional array and use numpy to manipulate it.
For example, if your image is stored in a variable of type numpy.array called img, you can find the maximum value along the desired axis just by writing:
img.max(axis=0)
To normalize the image you can use:
img /= img.max()
To find which part of the image is brighter, you can split the img array into the desired parts and calculate their means:
left = img[:, :int(img.shape[1]/2), :]
right = img[:, int(img.shape[1]/2):, :]
left_mean = left.mean()
right_mean = right.mean()
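For the alignment step described in the question, a rough numpy/scipy sketch (assuming img is a single-channel 2D array; the rotation sign convention may need checking against your data):
import numpy as np
from scipy import ndimage

# Coordinates of the two most intense pixels (the last one is the brightest)
ys, xs = np.unravel_index(np.argsort(img, axis=None)[-2:], img.shape)

# Angle of the axis connecting them, measured from the vertical
angle = np.degrees(np.arctan2(xs[1] - xs[0], ys[1] - ys[0]))

# Rotate so the connecting axis becomes vertical
img = ndimage.rotate(img, angle, reshape=False)

# Flip so the brightest pixel sits in the top half
if np.unravel_index(img.argmax(), img.shape)[0] > img.shape[0] / 2:
    img = np.flipud(img)

# Normalize the total intensity to 1, as requested in the question
img = img / img.sum()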

Extract feature vector from 2d image in numpy

I have a series of 2D images of two types, either a star or a pentagon. My aim is to classify all of these images accordingly. I have 30 star images and 30 pentagon images. An example of each is shown side by side here:
Before I apply the KNN classification algorithm, I need to extract a feature vector from each image. The feature vectors must all be the same size; however, the 2D images all vary in size. I have read in my image, and I get back a 2D array of zeros and ones.
import pylab as pl  # matplotlib's pylab interface

image = pl.imread('imagepath.png')
My question is: how do I process the image in order to produce a meaningful feature vector that contains enough information to allow me to do the classification? It has to be a single vector per image, which I will use for training and testing.
If you want to use OpenCV, then:
Resize images to a standard size:
import cv2
import numpy as np
src = cv2.imread("/path.jpg")
target_size = (64,64)
dst = cv2.resize(src, target_size)
Convert to a 1D vector:
dst = dst.reshape(target_size[0] * target_size[1])
Before you start coding, you have to decide which features are useful for this task:
The easiest way out is trying the approach in @Jordan's answer and converting the entire image into a feature vector. This could work because the classes are simple patterns, and it is interesting if you are using KNN. If this does not work well, the following points show how you should approach the problem.
The number of black pixels might not help, because the size of the star and pentagon can vary.
The number of sharp corners is very likely to be useful.
The number of straight line segments might be useful, but this could be unreliable because the shapes are hand-drawn.
Supposing you want to have a go at using the number of corners as a feature, you can refer to this page to learn how to extract corners.
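A minimal corner-counting sketch with OpenCV (the maxCorners, qualityLevel, and minDistance values are illustrative and will need tuning for hand-drawn shapes):
import cv2

img = cv2.imread('shape.png', cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corner detection; returns None if no corners pass the threshold
corners = cv2.goodFeaturesToTrack(img, maxCorners=20,
                                  qualityLevel=0.3, minDistance=10)
n_corners = 0 if corners is None else len(corners)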

Apply "reverse" colormap/lookup-table to generate grayscale image from RGB

We have a large dataset of thermal/infrared images. Due to some error, we received the data not as single-layer TIFs or something, but the camera software had already applied a colormap, and I'm now looking at RGB jpg files.
I was able to "reconstruct" the used colormap from an image found online, and now I'm looking for an efficient way to revert the RGB images to grayscale to be able to work with them. Small problem: not all of the image's RGB triplets may be represented in my reconstructed colormap, so right now my Python script does something like this:
import cv2
import numpy as np

I = cv2.imread('image.jpg')
Iout = I[:, :, 0] * 0
for i in range(0, I.shape[0]):
    for j in range(0, I.shape[1]):
        # squared difference between this pixel and every entry of the
        # reconstructed colormap cmap (shape (N, 3)); take the closest index
        idx = np.argmin(((cmap.astype(int) - I[i, j]) ** 2).sum(axis=1))
        Iout[i, j] = idx
This works, but is painfully slow because of the for-loops.
Is there any way to use a lookup table with the RGB values (3D or something) which can be applied to the image as a whole? For values not in the colormap it should select the closest one, as I did with the squared differences above.
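One vectorized option is a nearest-neighbour lookup over the colormap, e.g. with scipy's cKDTree. A sketch, assuming cmap is your reconstructed colormap as an (N, 3) array with at most 256 entries (and note that cv2.imread returns channels in BGR order, so cmap must match):
import cv2
import numpy as np
from scipy.spatial import cKDTree

I = cv2.imread('image.jpg')               # shape (H, W, 3), BGR order
tree = cKDTree(cmap)                      # build once from the colormap
_, idx = tree.query(I.reshape(-1, 3))     # closest entry for every pixel
Iout = idx.reshape(I.shape[:2]).astype(np.uint8)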

How to get the origin coordinate and pixel physical dimension from a PNG image

I've been searching all day for a way to get the physical dimensions from a PNG image. I need to convert a PNG file into a numpy array, which is easy.
However, I cannot find a way to get the physical dimension of each pixel in the same image. Additionally, I need the origin of the image (i.e. its coordinate).
I understand that the physical dimension of a pixel is stored in a PNG image in the pHYs chunk as part of the metadata. So I attempted to get all the metadata by following these steps: In Python, how do I read the exif data for an image?
However, ._getexif() is not an actual method in the current version.
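For the pixel size, Pillow exposes the pHYs chunk as DPI when the unit in the chunk is metres. A minimal sketch (note that PNG metadata has no notion of a world origin, so the origin coordinate would have to come from somewhere else, e.g. a sidecar file or a format like NIfTI/DICOM):
from PIL import Image
import numpy as np

im = Image.open('image.png')
arr = np.asarray(im)

dpi = im.info.get('dpi')   # e.g. (300.0, 300.0), or None if there is no pHYs chunk
if dpi:
    # physical size of one pixel in millimetres
    pixel_mm = (25.4 / dpi[0], 25.4 / dpi[1])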
