Resizing .nii file in Python

I am trying to resize NIfTI (.nii) files so that my program needs fewer computational resources; I want to rescale them from (240, 240, 155) to (120, 120, 155). I have tried using nilearn.image.resample_img to do this, but as seen in the image below, the output is not what I would expect. I need some help figuring this out.
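For reference, here is a minimal sketch of how nilearn.image.resample_img could be used to halve the in-plane resolution while keeping the same field of view (the file names are placeholders):

import nibabel as nib
from nilearn.image import resample_img

img = nib.load("scan.nii")  # placeholder input path

# Double the voxel size along x and y so the same field of view
# fits into a (120, 120, 155) grid.
target_affine = img.affine.copy()
target_affine[:3, 0] *= 2  # x spacing
target_affine[:3, 1] *= 2  # y spacing

resampled = resample_img(
    img,
    target_affine=target_affine,
    target_shape=(120, 120, 155),
    interpolation="continuous",
)
nib.save(resampled, "scan_120.nii")

A common pitfall is keeping the original affine while shrinking target_shape, which crops the volume instead of rescaling it.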

Related

Save PSD to JPG without invisible layers

I would like to export a PSD file to JPG after setting some of its group layers to invisible.
Currently, I loop through the PSD layers, set the relevant groups to invisible (group.visible = False), and then save the PSD.
The newly saved PSD does have those group layers marked invisible.
Later, the new PSD is converted to JPG.
However, the JPG output still shows the invisible layers.
The Python code used to convert the saved PSD to JPG is essentially the same as the code used for saving (we use psd_tools).
from psd_tools import PSDImage
image = PSDImage.open(PSDFilePath)
image.save(outputPath, "JPEG")
I have also tried the "convert" command line tool on Linux, but it also showed the invisible layers after conversion.
So my question is whether there is a way, in Python or from the command line, to either remove the invisible layers before saving to JPG, or to export to JPG so that invisible layers are excluded, without calling a script inside Photoshop (which requires opening Photoshop instances).
In the last few days I found something from this StackOverflow post that does the trick: adding composite(force=True).
from psd_tools import PSDImage
image = PSDImage.open(PSDFilePath)
image.composite(force=True).save(outputPath)  # outputPath is expected to be a JPG file
This works well for light files. However, when the PSD is very large, around 1 GB, it takes too much time.
As I would like to run this operation daily on more than 1,000 files, it would take days to complete.
So I am still looking for another solution.
Here is a link to a lighter sample file; unfortunately, I could not share the real one for professional reasons.
https://file.io/yxdDxlzMeMsA

"imagio.imsave" vs "imageio.core.util.Array.tofile"

I am expanding my limited Python knowledge by converting some MATLAB image analysis code to Python. I am following Image manipulation and processing using Numpy and Scipy. The code in Section 2.6.1 saves an image using both imageio.imsave and face.tofile, where type(face) is <class 'imageio.core.util.Array'>.
I am trying to understand why there are two ways to export an image. I tried searching the web for tofile, but only found numpy.ndarray.tofile; its documentation is very sparse and doesn't seem to be specific to images. I also looked for imageio.core.util.Array.tofile, but wasn't able to find anything.
Why are there two ways to export files? And why does imageio.core.util.Array.tofile seem to be un-findable online?
The difference is in what the two functions write in the file.
imageio.imsave() saves a conventional image, like a picture or photo, in a format such as JPEG or PNG that can be viewed with an image viewer like GIMP, feh, eog, Photoshop or MS Paint.
tofile() dumps the raw array data in a NumPy-oriented binary format, with no header or image metadata, that essentially only NumPy (and a small number of other Python tools) can read back.
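As an illustration, a rough sketch of the difference (the array here is a synthetic stand-in for the tutorial's face image, and the file names are placeholders):

import numpy as np
import imageio

# Stand-in for the tutorial's grayscale face image.
face = np.random.randint(0, 256, (768, 1024), dtype=np.uint8)

# imwrite/imsave encodes a real image file (PNG here) that any viewer can open.
imageio.imwrite("face.png", face)

# tofile dumps the raw array bytes with no header; shape and dtype are lost,
# so they must be supplied again when reading the data back.
face.tofile("face.raw")
restored = np.fromfile("face.raw", dtype=face.dtype).reshape(face.shape)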

How to get the image torchvision.utils.save_image saves, without reading it back from disk?

from torchvision.utils import save_image
...
save_image(im, f'im_name.png')
In my case (standard MNIST), using code from here, im is a Tensor (a batch of 96 images), and save_image works.
I want that image in memory to show it in other plots, and I don't want to read it back after saving it, which seems kind of stupid.
Is there a way to separate the functionality of generating the image and of saving it?
Edit
clarification:
I want an equivalent to
save_image(im, f'im_name.png')
reread = plt.imread(f'im_name.png')
without saving the image and reading it back.
I just want the image, and I want to save it later.
The save_image function does some work, like stacking multiple images into one grid and converting the tensor to an image of the correct size; I want only that part, without the saving to disk.
About 2 weeks later, I stumbled upon the solution by accident.
grid = torchvision.utils.make_grid(im)
grid will be the image save_image was just saving.
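Building on that, a small sketch (the batch here is a random placeholder) of turning the grid into a NumPy array for plotting, using roughly the same scaling save_image applies before writing the file:

import torch
import torchvision
import matplotlib.pyplot as plt

im = torch.rand(96, 1, 28, 28)  # placeholder batch; use your own (N, C, H, W) tensor

grid = torchvision.utils.make_grid(im)        # (C, H, W) float tensor in [0, 1]
ndarr = grid.mul(255).clamp(0, 255).byte()    # roughly the scaling save_image applies
ndarr = ndarr.permute(1, 2, 0).cpu().numpy()  # H x W x C uint8 for matplotlib

plt.imshow(ndarr)
plt.axis("off")
plt.show()

# The same array can be written to disk later, without a round trip:
# plt.imsave("im_name.png", ndarr)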

Reading tiffs in opencv swaps top and bottom third of image

I've got a pretty strange issue. I have several tif images of astronomical objects. I'm trying to use opencv's python bindings to process them. Upon reading the image file, it appears that segments of the images are swapped or rotated. I've stripped it down to the bare minimum, and it still reproduces:
import cv2

img = cv2.imread('image.tif', 0)         # 0 = load as grayscale
cv2.imwrite('image_unaltered.tif', img)  # write straight back out
I've uploaded some samples to imgur to show the effect. The images aren't super clear (that's the nature of preprocessed astronomical images), but you can see it:
First set:
http://imgur.com/vXzRQvS
http://imgur.com/wig99KR
Second set:
http://imgur.com/pf7tnPz
http://imgur.com/xGn9C77
The same rotated/swapped images appear if I use cv2.imshow(...) as well, so I believe it's something that happens when I read the file. Furthermore, it persists if I save as JPG. Opening the original in Photoshop shows the correct image. I'm using OpenCV 2.4.10, on Linux Mint 17.1. If it matters, the original TIFs were created with FITS Liberator on Windows.
Any idea what's happening here?

Python: Import multiple images from a folder and scale/combine them into one image?

I have a script that saves between 8 and 12 images to a local folder. These images are always GIFs. I am looking for a Python script to combine all the images in that one specific folder into one image. The combined 8-12 images would have to be scaled down, but I do not want to compromise the original quality (resolution) of the images either (i.e. when zoomed in on the combined image, they would look as they did initially).
The only way I am able to do this currently is by copying each image to power point.
Is this possible with python (or any other language, but preferably python)?
As an input to the script, I would type in the path where only the images are stored (e.g. C:\Documents and Settings\user\My Documents\My Pictures\BearImages).
EDIT: I downloaded ImageMagick and have been using it with the python api and from the command line. This simple command worked great for what I wanted: montage "*.gif" -tile x4 -geometry +1+1 -background none combine.gif
If you want to be able to zoom into the images, you do not want to scale them. You'll have to rely on the image viewer to do the scaling as they're being displayed - that's what PowerPoint is doing for you now.
The input images are GIF so they all contain a palette to describe which colors are in the image. If your images don't all have identical palettes, you'll need to convert them to 24-bit color before you combine them. This means that the output can't be another GIF; good options would be PNG or JPG depending on whether you can tolerate a bit of loss in the image quality.
You can use PIL to read the images, combine them, and write the result. You'll need to create a new image that is the size of the final result, and copy each of the smaller images into different parts of it.
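For illustration, a minimal Pillow sketch along those lines (the folder path, the 4-column grid, and the output name are assumptions):

from pathlib import Path
from PIL import Image

folder = Path(r"C:\Documents and Settings\user\My Documents\My Pictures\BearImages")
paths = sorted(folder.glob("*.gif"))

# Convert each GIF to 24-bit RGB so differing palettes don't matter.
images = [Image.open(p).convert("RGB") for p in paths]

cols = 4
rows = (len(images) + cols - 1) // cols
tile_w = max(img.width for img in images)
tile_h = max(img.height for img in images)

# One big canvas; each source image is pasted into its own grid cell.
combined = Image.new("RGB", (cols * tile_w, rows * tile_h), "white")
for i, img in enumerate(images):
    combined.paste(img, ((i % cols) * tile_w, (i // cols) * tile_h))

combined.save("combined.png")  # PNG keeps full quality; JPG would be lossy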
You may want to outsource the image manipulation part to ImageMagick. It has a montage command that gets you 90% of the way there; just pass it some options and the names of the files in the directory.
Have a look at the Python Imaging Library.
The handbook contains several examples of opening files, combining them, and saving the result.
The easiest thing to do is turn the images into numpy matrices, and then construct a new, much bigger numpy matrix to house all of them. Then convert the np matrix back into an image. Of course it'll be enormous, so you may want to downsample.
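A rough sketch of that approach (the file names are placeholders, and it assumes the frames share dimensions):

import numpy as np
from PIL import Image

# Two same-sized frames loaded as H x W x 3 arrays.
a = np.asarray(Image.open("frame1.gif").convert("RGB"))
b = np.asarray(Image.open("frame2.gif").convert("RGB"))

# Build one bigger matrix housing both, side by side.
big = np.hstack([a, b])  # or np.vstack for a column layout

# Optional 2x downsample by striding, then back to an image.
small = big[::2, ::2]
Image.fromarray(np.ascontiguousarray(small)).save("mosaic.png")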
