Following this question, How to convert Nifti file to Numpy array?, I created a 3D numpy array from a nifti image. I made some modifications to this array; for example, I changed the depth of the array by adding zero padding. Now I want to convert this array back to a nifti image. How can I do that?
I tried:
imga = Image.fromarray(img, 'RGB')
imga.save("modified/volume-20.nii")
but it doesn't recognize the .nii extension. I also tried:
nib.save(img,'modified/volume-20.nii')
This also doesn't work, because img must be a nibabel.nifti1.Nifti1Image if I want to use nib.save. In both examples above, img is a 3D numpy array.
Assuming that you have a numpy array and you want to use the nib.save function, you first need to get the affine transformation.
Example:
import os
import nibabel as nib
import numpy as np
# define the path to the data
func_filename = os.path.join(data_path, 'task-rest_bold.nii.gz')
# load the data
func = nib.load(func_filename)
# do computations that lead to a 3D numpy array called "output"
# bla bla bla
# output = np.array(....)
# to save this 3D numpy array (ndarray), use this
ni_img = nib.Nifti1Image(output, func.affine)
nib.save(ni_img, 'output.nii.gz')
Now you will be able to overlay output.nii.gz onto task-rest_bold.nii.gz.
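To tie this back to the question, here is a minimal sketch of padding a volume and saving it again. It assumes a source file volume-20.nii as in the question, and the padding amount is purely hypothetical:
import nibabel as nib
import numpy as np
# load the original volume so its affine can be reused
src = nib.load('volume-20.nii')
data = src.get_fdata()
# add zero padding along the depth axis, as described in the question
# (5 slices on each side is an arbitrary, hypothetical amount)
padded = np.pad(data, ((0, 0), (0, 0), (5, 5)), mode='constant')
# wrap the modified array with the original affine and save it
ni_img = nib.Nifti1Image(padded, src.affine)
nib.save(ni_img, 'modified/volume-20.nii')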
I have a dataset that comprises the binary data of pixelated 50x50 images. The array shape is (50, 50, 90245). I want to access the 50x50 pixels of each of the 90245 images. How can I slice the array?
If data is the variable storing the image data, and i is the index of the image you want to access, then you can do:
data[:,:,i]
to get the desired image data.
If data is the variable storing the image data, and i is the index of the image you want to access, then you can do as @BrokenBenchmark suggested. In case you want a (50, 50, 1) 3D array as the output, you could do:
data[:,:,i:i+1]
to get the image as a 3D array.
Edit 1: If you reshaped your data matrix to have shape (90245, 50, 50), you can get the ith image by doing data[i,:,:], or just data[i], to get a (50, 50) image. Similarly, to get a (1, 50, 50) image, you could do data[i:i+1,:,:] or just data[i:i+1].
Edit 2: To reorder the axes this way, you could use the swapaxes() function in numpy, as in the sketch below.
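For example, here is a minimal sketch of that axis reordering (the array contents are placeholders):
import numpy as np
data = np.zeros((50, 50, 90245))  # placeholder for the real image stack
# move the image index to the front: (50, 50, 90245) -> (90245, 50, 50);
# note that swapaxes also transposes each 50x50 image, whereas
# np.moveaxis(data, 2, 0) reorders without flipping rows and columns
reordered = data.swapaxes(0, 2)
print(reordered.shape)     # (90245, 50, 50)
print(reordered[0].shape)  # (50, 50), the first image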
I have a greyscale image, represented by a 2D array of integers, shape (1000, 1000).
I then use sklearn.feature_extraction.image.extract_patches_2d() to generate an array of 3x3 'patches' from this image, resulting in an array of shape (1000000, 3, 3): one 3x3 array for each of the 1 million pixels in the original image.
I reshape this to (1000, 1000, 3, 3), which is a 1000x1000 array of 3x3 arrays, one 3x3 array for each pixel in the original image.
I now want to effectively subtract the 2D array from the 4D array. I have already found a method that works, but I would like one that uses vectorisation.
I currently iterate over each pixel and subtract the value there from the 3x3 array at the same index. This is a little slow.
This is the code that currently loads the images, formats the arrays beforehand, and then performs the subtraction.
from PIL import Image, ImageOps
from skimage import io
from sklearn.feature_extraction import image
import numpy
jitter = 1
patchsize = (jitter*2)+1
#load image as greyscale image using PIL
original = load_image_greyscale(filename)
#create a padded version of the image so that 1000x1000 patches are made
#instead of 998x998
padded = numpy.asarray(ImageOps.expand(original,jitter))
#extract these 3x3 patches using sklearn
patches = image.extract_patches_2d(padded,(patchsize,patchsize))
#convert image to numpy array
pixel_array = numpy.asarray(original)
#then reshape the array of patches so it matches array_image
patch_array = numpy.reshape(patches, (pixel_array.shape[0],pixel_array.shape[1],patchsize,patchsize))
#create a copy for results
patch_array_copy = numpy.copy(patch_array)
#iterate over each 3x3 array in the patch array and subtract the pixel value
#at the same index in the pixel array
for x in range(pixel_array.shape[0]):
    for y in range(pixel_array.shape[1]):
        patch_array_copy[x,y] = patch_array[x,y] - pixel_array[x,y]
I would like a way to perform the final step in the for loop using matrix operations.
I would also like to extend this at some point to work with RGB images, effectively making it a subtraction of an array with shape (1000, 1000, 3) from an array with shape (1000, 1000, 3, 3, 3). But I'm trying to go one step at a time here.
Any help or tips or suggestions or links to helpful resources would be greatly appreciated.
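For what it's worth, here is a broadcasting-based sketch of that final step, using the shapes described above (placeholder arrays, not the original data):
import numpy as np
pixel_array = np.zeros((1000, 1000), dtype=np.int32)        # placeholder image
patch_array = np.zeros((1000, 1000, 3, 3), dtype=np.int32)  # placeholder patches
# add two trailing length-1 axes so the (1000, 1000) array broadcasts
# against the (1000, 1000, 3, 3) patch array
patch_array_copy = patch_array - pixel_array[:, :, None, None]
# the RGB case is analogous: index with [:, :, None, None, :] so a
# (1000, 1000, 3) array broadcasts against (1000, 1000, 3, 3, 3)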
I am trying to load my own image dataset from a folder with two subdirectories, where all the images are 16-bit RGB PNGs with dimensions 64x64. I am converting them to greyscale and forcing the numpy array to have data type uint16. This returns a list of images as (64, 64) numpy arrays.
import os
import numpy
from PIL import Image
path = "D:/PROJECT ___ CU/Images for 3D/imagedatanew/Training2/"
imageset = []
image_labels = []
for directory in os.listdir(path):
    for file in os.listdir(path + directory):
        print(path + directory + "/" + file)
        img = Image.open(path + directory + "/" + file)
        featurevector = numpy.array(img.convert("L"), dtype='uint16')
        imageset.append(featurevector)
        image_labels.append(directory)
But when I try to convert this list of 2D arrays into a 3D array, I can't do it:
im = numpy.array(imageset)
>>> im.shape
(207,)  # there are 207 images in total
I want the array to have shape (207, 64, 64).
Also, the im array has dtype "object", which I can't understand.
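For reference, numpy only stacks a list of 2D arrays into one 3D array when every array has exactly the same shape; if the shapes don't match, you get a 1D array of dtype object instead, which matches the symptoms above. A minimal sketch with hypothetical shapes:
import numpy
same = [numpy.zeros((64, 64), dtype='uint16') for _ in range(3)]
print(numpy.array(same).shape)  # (3, 64, 64): uniform shapes stack cleanly
mixed = [numpy.zeros((64, 64)), numpy.zeros((63, 64))]
ragged = numpy.array(mixed, dtype=object)  # mismatched shapes give an object array
print(ragged.shape, ragged.dtype)  # (2,) object
So it is worth checking that every file really decodes to (64, 64).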
I'm trying to open an image with size (520, 696), but when I use this:
array = np.array([np.array(Image.open(folder_path+folders+'/'+'images'+'/'+image))], np.int32).shape
I'm getting the shape as
(1, 520, 696, 4)
The problem is that with this shape I can't convert it to an image using toimage(array); I get:
'arr' does not have a suitable array shape for any mode.
Any suggestions on how I can read that image with only (520, 696)?
The problem is the additional dummy dimension. You can remove it using:
arr = np.squeeze(arr)
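A minimal sketch of the effect, using the shape reported in the question:
import numpy as np
arr = np.zeros((1, 520, 696, 4))  # shape reported in the question
arr = np.squeeze(arr)             # drops every length-1 axis
print(arr.shape)                  # (520, 696, 4): the leading 1 is gone
Note that the four-channel (RGBA) axis remains; if you need a plain (520, 696) array, you still have to convert to greyscale or pick a single channel.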
You should load the picture as a single picture instead of loading it as a stack and then removing the irrelevant stack dimension. The basic procedure could be something like this:
from PIL import Image
import numpy as np
pic = Image.open("test.jpg")
pic.show()                      # yup, that's the picture
arr = np.array(pic)             # convert it to a numpy array
print(arr.shape, arr.dtype)     # dimensions and data type
arr //= 2                       # now manipulate this array
new_pic = Image.fromarray(arr)  # and keep it for later
new_pic.save("newpic.bmp")      # maybe in a different format
I am using Pillow and numpy, but I have a problem with the conversion between a Pillow Image object and a numpy array.
When I execute the following code, the result is weird:
im = Image.open(os.path.join(self.img_path, ifname))
print im.size
in_data = np.asarray(im, dtype=np.uint8)
print in_data.shape
The result is:
(1024, 768)
(768, 1024)
Why is the dimension changed?
im may be column-major while arrays in numpy are row-major.
Do in_data = in_data.T to transpose the array.
You should probably check in_data with matplotlib's imshow to make sure the picture looks right.
But did you know that matplotlib comes with its own loading functions that give you numpy arrays directly? See: http://matplotlib.org/users/image_tutorial.html
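A minimal sketch of that approach (the filename is hypothetical):
import matplotlib.image as mpimg
arr = mpimg.imread('test.jpg')  # returns a numpy array directly
print(arr.shape)  # (height, width, channels): rows first, as in numpy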
If your image is greyscale do:
in_data = in_data.T
but if you are working with RGB images, you want to make sure your transpose operation is along only two axes:
in_data = np.transpose(in_data, (1,0,2))
Actually, this is because most image libraries give you images that are transposed compared to numpy arrays. This is (I think) because you write image files line by line, so the first index (let's say x) refers to the line number (so x is the vertical axis) and the second index (y) refers to the subsequent pixel in the line (so y is the horizontal axis), which is contrary to our everyday sense of coordinates.
If you want to handle it correctly you need to remember to write:
image = library.LoadImage(path)
array = (library.FromImageToNumpyArray(image)).T
and consequently:
image = library.FromNumpyArrayToImage(array.T)
library.WriteImage(image, path)
This also works for 3D images. But I'm not promising this is the case for ALL image libraries, just the ones I have worked with.