Getting the intensities of a certain region of an MR image - python

I have a 3D MR image as a NIfTI file (.nii.gz). I also have a 'mask' image as a NIfTI file, which is just a bunch of 0s and 1s. The 1s in this mask image represent the region of the 3D MR image I am interested in.
I want to retrieve the intensities of the pixels in the 3D MR image which fall inside the mask (i.e. where the mask image is 1). The only intensity feature I have found is sitk.MinimumMaximumImageFilter, which isn't very useful here since it operates on the entire image (rather than a particular region), and it only gives the minimum and maximum of said image.
I don't think the GetPixel() function helps me in this case either, since the 'pixel value' it outputs is different from the intensity I observe in the ITK-SNAP viewer. Is this correct?
What tool or feature could I use to help in this scenario?

use itk::BinaryImageToStatisticsLabelMapFilter
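Since the question is already using SimpleITK, here is a minimal sketch of the same idea with SimpleITK's LabelStatisticsImageFilter (the filenames are placeholders; the mask is cast to an integer type because the filter expects an integer label image):

import SimpleITK as sitk

image = sitk.ReadImage("image.nii.gz")
# label image must be integer-typed
mask = sitk.Cast(sitk.ReadImage("mask.nii.gz"), sitk.sitkUInt8)

stats = sitk.LabelStatisticsImageFilter()
stats.Execute(image, mask)
# statistics over the voxels where the mask is 1
print(stats.GetMinimum(1), stats.GetMaximum(1),
      stats.GetMean(1), stats.GetSigma(1))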

You might want to use itk::Statistics::MaskedImageToHistogramFilter, followed by min = histogram.Quantile(0, 0.0) and max = histogram.Quantile(0, 1.0). You probably need to use more bins than the example uses.
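If you are comfortable dropping to numpy, you can also read the same quantiles directly off the masked voxels. A sketch of the equivalent computation (not the ITK histogram filter itself):

import numpy as np
import SimpleITK as sitk

arr = sitk.GetArrayFromImage(sitk.ReadImage("image.nii.gz"))
msk = sitk.GetArrayFromImage(sitk.ReadImage("mask.nii.gz"))
vals = arr[msk == 1]                        # intensities inside the mask
vmin, vmax = np.quantile(vals, [0.0, 1.0])  # i.e. vals.min(), vals.max()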

Related

Python - fill colors of an image with closest non zero color

I have an image that is created by ray casting a bunch of vectors onto a mesh with a UV map (in Blender). There are not enough vectors to completely cover the image, so I'd like a way to fill the rest of the image with the closest non-zero color. I've been looking into some techniques with convolutions in numpy etc., but can't really find what I need. Attached is an example of an image I'm working with (PNG with RGBA).
[Edited to add]
Possibly a better description of what I am trying to do:
For each pixel that doesn't have a cast color (i.e. black), I need to find the closest pixel with a cast color, where "closest" means spatial distance, not similarity of the RGB values.
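One way to do exactly this is scipy's Euclidean distance transform, which can return, for every empty pixel, the indices of the nearest non-empty pixel. A minimal sketch, assuming img is an H x W x 4 RGBA numpy array and "empty" means RGB all zero:

import numpy as np
from scipy import ndimage

def fill_nearest(img):
    # True where the pixel has no cast color (RGB all zero)
    empty = (img[..., :3] == 0).all(axis=-1)
    # for each empty pixel, indices of the nearest non-empty pixel
    idx = ndimage.distance_transform_edt(
        empty, return_distances=False, return_indices=True)
    # idx has shape (2, H, W); use it to pull the nearest colors across
    return img[tuple(idx)]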

How to deal with negative pixel values in images loaded using simpleITK

I have been working on DICOM CT scan images. I used SimpleITK to read the images into a numpy array, but the pixel values are negative floats (dtype float32), as shown below. How can I convert these pixel values so that I can train a TensorFlow 3D CNN model on them?
import SimpleITK as sitk

# Read the .nii image containing the volume with SimpleITK:
sitk_obj = sitk.ReadImage(filename)
# and access the numpy array:
image = sitk.GetArrayFromImage(sitk_obj)
[image: array output showing negative pixel values]
The images read have different shapes. How can I resize them all to one specific, constant shape (as shown in the image below)?
[image: arrays with different shapes]
If you use SimpleITK's RescaleIntensity function, you can rescale the pixel values to whatever range you require. Here's the docs for that function:
https://simpleitk.org/doxygen/latest/html/namespaceitk_1_1simple.html#af34ebbd0c41ae0d0a7a152ac1382bac6
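For example, a short sketch mapping the intensities into [0, 255] (the output range is up to you; pick whatever your model expects):

import SimpleITK as sitk

img = sitk.ReadImage(filename)
rescaled = sitk.RescaleIntensity(img, outputMinimum=0.0, outputMaximum=255.0)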
To resize your images you can use SimpleITK's ResampleImageFilter. Here's the docs for that class:
https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1ResampleImageFilter.html
The following StackOverflow answer shows how to create a reference image that you resample your image onto:
https://stackoverflow.com/a/48065819/3712577
And this Github Gist how to resample several images to the same reference image:
https://gist.github.com/zivy/79d7ee0490faee1156c1277a78e4a4c4
Note that SimpleITK considers images as objects in physical space. So if the image origins, directions, and pixel spacings do not match up, then you will not get the result you expect.
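Putting the resampling pieces together, a rough sketch that resamples a volume onto a fixed grid while preserving its physical extent (new_size is a hypothetical target; adjust it to your model's input shape):

import SimpleITK as sitk

def resample_to_size(img, new_size=(128, 128, 64)):
    old_size = img.GetSize()
    old_spacing = img.GetSpacing()
    # stretch/shrink the spacing so the physical extent stays the same
    new_spacing = [osz * osp / nsz
                   for osz, osp, nsz in zip(old_size, old_spacing, new_size)]
    return sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                         img.GetOrigin(), new_spacing, img.GetDirection(),
                         0.0, img.GetPixelID())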

Image Operations with Python

I hope you're all doing well!
I'm new to image manipulation, so I want to apologize right here for my simple question. I'm currently working on a problem that involves classifying an object called a jet into two known categories. This object is made of sub-objects. My idea is to use these sub-objects to transform each jet into a pixel image, and then apply convolutional neural networks to find the patterns.
Here is an example of the pixel images:
[image: jet constituents' pixel distribution]
To standardize all the images, I want to find the two most intense pixels and make sure the axis connecting them is vertical, with the most intense pixel on top. It would also be good to ensure that one side (left or right) of the image contains the majority of the intensity, and to normalize the total intensity of the whole image to 1.
My question is: as I'm new to this kind of processing, I don't know if there is a library in Python that can handle these operations. Are you aware of any?
PS: the picture was taken from here: https://arxiv.org/abs/1407.5675
You can look into the OpenCV library for Python:
https://docs.opencv.org/master/d6/d00/tutorial_py_root.html
It supports a lot of image processing functions.
In your case, it would probably be easiest to convert the image into a color space in which one axis stands for intensity (e.g. HSI, HSL, HSV) and then find the indices of the maximum values along that axis; this returns the pixels with the highest intensity in the image.
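A small sketch of that idea (the filename is a placeholder):

import cv2
import numpy as np

img = cv2.imread('jet.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
v = hsv[..., 2]                                 # the value (intensity) channel
y, x = np.unravel_index(np.argmax(v), v.shape)  # location of the brightest pixel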
Generally, in Python, we use the PIL library for basic manipulations with images and OpenCV for advanced ones.
But if I understand your task correctly, you can just think of an image as a multidimensional array and use numpy to manipulate it.
For example, if your image is stored in a numpy.array variable called img, you can find the maximum value along the desired axis just by writing:
img.max(axis=0)
To normalize the image you can use:
img /= img.max()
To find which part of the image is brighter, you can split the img array into the desired parts and compare their means:
left = img[:, :img.shape[1] // 2, :]
right = img[:, img.shape[1] // 2:, :]
left_mean = left.mean()
right_mean = right.mean()
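Tying these pieces together, here is a rough end-to-end sketch of the standardization you describe, assuming img is a single-channel 2D numpy array of intensities (standardize_jet is a hypothetical helper, not an existing library function):

import numpy as np
from scipy import ndimage

def standardize_jet(img):
    # locate the two brightest pixels; flat index -1 is the brightest
    idx = np.argsort(img, axis=None)[-2:]
    (y2, y1), (x2, x1) = np.unravel_index(idx, img.shape)
    # on-screen angle of the second-brightest -> brightest vector;
    # scipy.ndimage.rotate rotates counterclockwise on screen, so
    # rotating by (90 - theta) makes that vector point straight up
    theta = np.degrees(np.arctan2(-(y1 - y2), x1 - x2))
    out = ndimage.rotate(img, 90.0 - theta, reshape=False, order=1)
    # mirror so the brighter half ends up on the left
    half = out.shape[1] // 2
    if out[:, half:].sum() > out[:, :half].sum():
        out = out[:, ::-1]
    # normalize the total intensity to 1
    return out / out.sum()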

How to classify between a color image and grey scale image using opencv?

I have a use case where I need to classify some images as grey-scale or color. My initial approach was based on the idea that in a grey-scale image the r, g, b values at each pixel should be equal, since it is effectively a single channel, whereas in a color image the r, g, b values at a pixel may differ.
So I am checking the differences (r, g), (b, g) and (r, b), and if all three are zero everywhere, it's grey-scale; otherwise it's color.
This approach helped me identify many grey-scale images, but there are still some images that do not follow this logic. Can anyone suggest some good features on which we can classify an image as color or grey-scale using OpenCV?
Please don't ask me to check the number of channels: it gives 3 for both classes, as we are loading the images in .jpg format.
Thanks in advance
I suspect some of these never were grey-scale images after digitizing (e.g. a color scan of a grey-scale picture). Due to noise, there are minimal differences in the RGB values. A low threshold greater than perfect zero should do the trick.
Please note that JPEG does have a grey-scale option, but you have to request that mode when storing the picture; compressors usually do not pick it up automatically. Also, you explicitly need to set the IMREAD_UNCHANGED flag when reading with OpenCV's imread, otherwise imread converts everything to a 3-channel BGR image.
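For example (the filename is a placeholder):

import cv2

# without IMREAD_UNCHANGED, imread returns a 3-channel BGR array
# even for a JPEG that was saved in grey-scale mode
img = cv2.imread('photo.jpg', cv2.IMREAD_UNCHANGED)
print(img.ndim)  # 2 for a true single-channel grey-scale file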
With the method suggested by @QuangHoang I got 85+% accuracy.
Here is the approach, explained in code:
import cv2
import numpy as np

# test image (OpenCV loads images in BGR channel order)
img = cv2.imread('test.jpg')
b, g, r = cv2.split(img)

# count the pixels where each pair of channels differs
# (cv2.absdiff avoids the uint8 wrap-around that plain subtraction causes)
r_g = np.count_nonzero(cv2.absdiff(r, g))
r_b = np.count_nonzero(cv2.absdiff(r, b))
g_b = np.count_nonzero(cv2.absdiff(g, b))
diff_sum = float(r_g + r_b + g_b)

# ratio of differing values to the total number of values in the image
ratio = diff_sum / img.size
if ratio > 0.005:
    label = 'color'
else:
    label = 'grey'
Thanks for all the suggestions.

Object (simple shapes) Detection in Image

I've got the following image.
[images: the example and other samples]
I want to detect the six square-shaped green portions and the one circular portion above them. I basically want a binary image with these portions marked 1 (white) and everything else 0 (black).
What have I done so far?
I found a range of H, S, and V within which these colors fall, and it works fine for a single image. But I've got multiple such images, some taken under different illumination (brightness) conditions, and the ranges do not work in those cases. What should I do to make the thresholding as invariant to brightness as possible? Is there a different approach I should take to thresholding?
What you did was manually analyze the values you need for thresholding a specific image, and then apply them. What you are seeing is that an analysis done on one image doesn't necessarily fit other images.
The solution is to do the analysis automatically for each image. This can be achieved by creating a histogram for each of the channels; if you're working in HSV, I'm guessing the H channel would be pretty much useless in this case.
Anyway, once you have the histograms, you can choose the threshold using something like Lloyd-Max quantization, which is basically a k-means-style clustering of intensities. This should give you the centroids of the intensity of the white background and of the other colors, and you can then pick the threshold based on the clusters' standard deviations.
For example, for the image you gave above, the histogram of the S channel looks like:
[image: histogram of the S channel]
You can see that the large blob near 0 is the white background, which has the lowest saturation.
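A minimal sketch of the automatic, two-cluster version of this idea, using Otsu's threshold on the S channel (Otsu is the two-class special case of that intensity-clustering approach; the filename is a placeholder):

import cv2

img = cv2.imread('shapes.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
s = hsv[..., 1]
# Otsu picks the threshold from the histogram automatically,
# separating the low-saturation background from the colored shapes
_, mask = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)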
