I have a NIfTI object generated from a directory of DICOM files.
It seems the NIfTI should know how many frames it holds, but all I can find in the header info is the shape. The problem is, the shape is sometimes (num_images, x, y) and sometimes (x, y, num_images).
The only relevant nibabel functions I found were in the Ecat module. I am not familiar with the ECAT format, and I want my method to work for any .nii file. I am working with the nibabel library.
Is there a way to retrieve the number of images in a NIfTI file?
I'm guessing you're looking at fMRI, DTI or ASL data.
Say your 4D .nii stack is called 'data.nii'.
Just go into that directory and do:
import nibabel as nib

mri = nib.load('data.nii')
mri.shape
The fourth element you see will be the number of volumes. You can access it with mri.shape[3] if you need it elsewhere in your program.
This works consistently for me. If your data are "stacked" in an inconsistent orientation, you are going to have to get fancier.
You could include checks based on the dimensionality of your images. For example, if you know your images are 128x128x128, you can take whichever element of mri.shape isn't 128, though this approach is suboptimal for a few reasons.
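For illustration, a minimal sketch combining both ideas, assuming a file named 'data.nii' with 128x128 spatial dimensions for the fallback check (the variable names are my own):

import nibabel as nib

mri = nib.load('data.nii')

if mri.ndim == 4:
    # 4D case: the last axis is the number of volumes
    num_images = mri.shape[3]
else:
    # 3D case: fall back on the dimensionality check described above
    num_images = next(d for d in mri.shape if d != 128)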
I'm using Python to check whether a web banner in GIF format exists on hundreds of websites.
I have a folder of GIF files as examples, and I'm comparing the GIF files from each website with my own example files.
I use filecmp, but I found that many sites compress the GIF files, so even when the files are visually identical, filecmp won't detect them as the same.
Is there any Python library to detect whether two GIF files or videos are similar even if the resolution has changed?
Comparing two images for similarity is a general image processing problem so the solution you develop can be as simple or complex as you want it to be. In your specific case, you'll need a method for making two images the same size and a method for comparing the images.
First, you'll probably want to convert the images to RGB or grayscale arrays for comparison.
I would suggest reducing the size of the larger image to the size of the smaller image. That is less likely to introduce artifacts than increasing the size of the smaller image. Resizing can be accomplished with the Python Pillow library.
Image.resize(size, resample=None, box=None, reducing_gap=None)
https://pillow.readthedocs.io/en/stable/reference/Image.html
The resampling method may have a small effect on the similarity measure. However, you'll probably be fine just using resample=Image.NEAREST.
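A minimal sketch of the resizing step (the file names are hypothetical):

from PIL import Image

imageA = Image.open('reference.gif').convert('L')    # grayscale
imageB = Image.open('downloaded.gif').convert('L')

# Shrink the larger image down to the smaller image's size
if imageA.size[0] * imageA.size[1] >= imageB.size[0] * imageB.size[1]:
    imageA = imageA.resize(imageB.size, resample=Image.NEAREST)
else:
    imageB = imageB.resize(imageA.size, resample=Image.NEAREST)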
After making sure the images are the same size, they must be compared. One could compare them using mean squared error (MSE) or structural similarity (SSIM). Luckily, SSIM is already implemented in scikit-image.
from skimage.metrics import structural_similarity as ssim
import numpy as np

s = ssim(np.asarray(imageA), np.asarray(imageB))  # 1.0 means identical
https://www.pyimagesearch.com/2014/09/15/python-compare-two-images/
In your case, MSE might work just as well. However, if some process has changed the average brightness of one image, you'd want to first subtract the mean from each image.
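A minimal MSE sketch with that brightness correction (the helper name is my own):

import numpy as np

def mse(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a -= a.mean()    # remove any overall brightness offset
    b -= b.mean()
    return np.mean((a - b) ** 2)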
If resizing is the only issue, that should be it. If, however, the images may have been flipped, rotated, or cropped, additional steps might be necessary.
I am currently working on a summer research project and we have generated 360 slices of a tumor. I now need to compile (if that's the right word) these images into one large 3D image. Is there a way to do this with either a Python module or an outside source? I would prefer free software if possible.
Perhaps via matplotlib, though it may require some preprocessing, I suppose:
https://www.youtube.com/watch?v=5E5mVVsrwZw
In your case, the z axis (third dimension) would be given by your sequence of images. Before proceeding, though, you may need to extract the shape of the object you want to reconstruct. For instance, each of your 2D images presumably stores an RGB value per pixel, but if you want to plot a surface like the skull in the linked video, you would first need to extract the borders of your object from each frame (2D slice) and then plot the series. The exact processing depends on how your information is encoded; perhaps it is sufficient to simply plot the series of images.
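A minimal sketch of stacking the slices into one 3D array (the folder name and file format are assumptions):

import glob
import numpy as np
from PIL import Image

# Read the 360 slices in order and stack them along a new z axis
files = sorted(glob.glob('slices/*.png'))
volume = np.stack([np.asarray(Image.open(f).convert('L')) for f in files], axis=-1)
print(volume.shape)    # (rows, cols, 360)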
Some useful links I found:
https://www.researchgate.net/post/How_to_reconstruct_3D_images_from_two_or_four_2D_images
Python: 3D contour from a 2D image - pylab and contourf
I have two 2D arrays. One consists of reference data and the other of measured data. While the matrices have the same shape, the measured data will not be perfectly centered; that is, the sample may not have been perfectly aligned with the detector. It could be rotated or translated. I would like to align the matrices based on their features, much like image registration. I'm hoping someone can point me toward a Python package capable of this, or let me know whether OpenCV can do this for numpy arrays with arbitrary values that don't fit the mold of a typical .png or .jpg file.
I have aligned images using OpenCV image registration functions. I have attempted to convert my arrays to images using PIL, with the intent of using the image registration functions within OpenCV. If needed I can post my sample code, but at this point I want to know whether there is a package with functions capable of doing this.
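For what it's worth, one translation-only option is scikit-image's phase_cross_correlation, which works directly on arbitrary-valued NumPy arrays (rotation would need a different tool); a minimal sketch with synthetic data:

import numpy as np
import scipy.ndimage as ndi
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
measured = ndi.shift(reference, (3.0, -5.0))   # simulate a translated measurement

# Per the docs, applying the returned shift registers `measured` with `reference`
shift, error, diffphase = phase_cross_correlation(reference, measured)
aligned = ndi.shift(measured, shift)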
I am in the process of porting code I wrote in IDL (Interactive Data Language) to Python, but I am running into a bit of a problem that I am hoping someone can help me with.
The code goes like this:

1. Take individual classified Landsat GeoTIFFs (say there are N individual 1-band files per scene, each representing a different day) and reduce these images to three binary-themed 1-band images (water/not water, land/not land, water-or-land/not water-or-land). This is done by reading the rasters as matrices and replacing values. (I don't actually need to keep these images, so I can hold them in memory or just keep them as numpy ndarrays for the next step.)
2. Stack these images/arrays to produce 3 different N-band stacks, one per 'element', i.e. a 3-dimensional (samples, lines, N) array for each scene.
3. Total the stacks to get the number of water/land/water&land observations per pixel (producing one 1-band total image for each scene).
4. Other stuff.
The problem I am running into is the stacking, as the individual images for each scene vary in size, although they mostly overlap with each other. I originally used an ENVI layer-stacking routine that takes the N different-sized 1-band images for each scene, stacks them into an N-band image whose extent encompasses all of the images' extents, and then reads the resulting raster in as a 3-d array to compute the totals. I would like to do something similar with gdal/python but am not sure how to go about it. I was thinking I would use the geotransform info of the GeoTIFFs to find the inclusive extent, pad the edges of the images with 0's so they are all the same size, stack these arrays so that they are correctly aligned, and then compute the totals (a rough sketch of that idea follows). Hopefully there is something more direct in gdal (or any other open-source package for Python), as I'm not sure how I would pull that off.
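A rough sketch of the padding approach, assuming all rasters share the same projection and pixel size (the file names are hypothetical):

from osgeo import gdal
import numpy as np

def read_band(path):
    ds = gdal.Open(path)
    return ds.ReadAsArray(), ds.GetGeoTransform()

paths = ['scene_day1.tif', 'scene_day2.tif']   # the N 1-band files
arrays, gts = zip(*(read_band(p) for p in paths))

# Geotransform: (ulx, xres, 0, uly, 0, yres), with yres negative for north-up
xres, yres = gts[0][1], gts[0][5]
ulx = min(gt[0] for gt in gts)
uly = max(gt[3] for gt in gts)
lrx = max(gt[0] + a.shape[1] * xres for a, gt in zip(arrays, gts))
lry = min(gt[3] + a.shape[0] * yres for a, gt in zip(arrays, gts))

rows = int(round((uly - lry) / -yres))
cols = int(round((lrx - ulx) / xres))

# Pad each band into the inclusive extent, then total along the band axis
stack = np.zeros((rows, cols, len(arrays)))
for i, (a, gt) in enumerate(zip(arrays, gts)):
    r0 = int(round((gt[3] - uly) / yres))
    c0 = int(round((gt[0] - ulx) / xres))
    stack[r0:r0 + a.shape[0], c0:c0 + a.shape[1], i] = a

totals = stack.sum(axis=2)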
Does anyone have any suggestions or ideas as to what would be the most efficient way (or any way really), to do what I need to do? I'm open to anything.
Thanks so much,
Maggie
I have a script to save between 8 and 12 images to a local folder. These images are always GIFs. I am looking for a Python script to combine all the images in that one specific folder into one image. The combined 8-12 images would have to be scaled down, but I do not want to compromise the original quality (resolution) of the images either (i.e., when zoomed in on the combined image, they would look as they did initially).
The only way I am able to do this currently is by copying each image into PowerPoint.
Is this possible with python (or any other language, but preferably python)?
As input to the script, I would type in the path where only the images are stored (e.g. C:\Documents and Settings\user\My Documents\My Pictures\BearImages).
EDIT: I downloaded ImageMagick and have been using it with the Python API and from the command line. This simple command worked great for what I wanted:
montage "*.gif" -tile x4 -geometry +1+1 -background none combine.gif
If you want to be able to zoom into the images, you do not want to scale them. You'll have to rely on the image viewer to do the scaling as they're being displayed - that's what PowerPoint is doing for you now.
The input images are GIF so they all contain a palette to describe which colors are in the image. If your images don't all have identical palettes, you'll need to convert them to 24-bit color before you combine them. This means that the output can't be another GIF; good options would be PNG or JPG depending on whether you can tolerate a bit of loss in the image quality.
You can use PIL to read the images, combine them, and write the result. You'll need to create a new image that is the size of the final result, and copy each of the smaller images into different parts of it.
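A minimal sketch of that approach, assuming equally sized GIFs and a 4-column grid (the folder path and grid shape are assumptions):

import glob
from PIL import Image

paths = sorted(glob.glob(r'BearImages\*.gif'))
tiles = [Image.open(p).convert('RGB') for p in paths]   # 24-bit color, as noted above

w, h = tiles[0].size
cols = 4
rows = (len(tiles) + cols - 1) // cols

# Create the full-size output image and paste each tile into its grid cell
sheet = Image.new('RGB', (cols * w, rows * h))
for i, tile in enumerate(tiles):
    sheet.paste(tile, ((i % cols) * w, (i // cols) * h))
sheet.save('combined.png')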
You may want to outsource the image manipulation part to ImageMagick. It has a montage command that gets you 90% of the way there; just pass it some options and the names of the files in the directory.
Have a look at the Python Imaging Library.
The handbook contains several examples of opening files, combining them, and saving the result.
The easiest thing to do is turn the images into numpy matrices and then construct a new, much bigger numpy matrix to house all of them. Then convert the big matrix back into an image. Of course it'll be enormous, so you may want to downsample.
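A minimal sketch of that numpy idea for two equally sized images (the file names are hypothetical):

import numpy as np
from PIL import Image

a = np.asarray(Image.open('left.gif').convert('RGB'))
b = np.asarray(Image.open('right.gif').convert('RGB'))

combined = np.concatenate([a, b], axis=1)   # place the images side by side
Image.fromarray(combined).save('combined.png')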