PIL raises IndexError: tuple index out of range when converting a 1D numpy array into a PIL image object.
I am trying to convert a 1D numpy array of length 2048, with values between 0 and 255, into an image using PIL. I think the issue is that my array is 1D. I have also tried converting a random 1D integer array to an image and I get the same error.
Random integer example:
from PIL import Image
import numpy as np
arr = np.random.randint(255, size=(2048))
arr = arr.astype('uint8')
img = Image.fromarray(arr, 'L')
img.show()
I would expect the code to show an image of a single line of pixels in varying shades of grey.
When I tried to run your code, the problem was simply that your array is 1D. So try:
arr2d = arr.reshape(-1,1)
Image.fromarray(arr2d,'L').show()
The input array has to be 2D, even if one dimension is 1. You just need to decide whether you want the image to be a horizontal or a vertical row of pixels, and add a dimension when creating your array.
arr = np.random.randint(255, size=(2048, 1)) # vertical image
arr = np.random.randint(255, size=(1, 2048)) # horizontal image
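Equivalently, you can keep the original 1D array and insert the missing axis only at conversion time with None (i.e. np.newaxis). A minimal sketch of that variant:
from PIL import Image
import numpy as np
arr = np.random.randint(255, size=2048).astype('uint8')
# arr[None, :] has shape (1, 2048) -> a horizontal strip of pixels
# arr[:, None] has shape (2048, 1) -> a vertical strip of pixels
Image.fromarray(arr[None, :], 'L').show()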
I am trying to shuffle the pixel positions in an image to get an encrypted (distorted) image, and then decrypt the image using the original positions in Python. This is what I got from GPT, and the shuffled images appear to be black.
from PIL import Image
import numpy as np
# Load the image
img = Image.open('test.png')
# Convert the image to a NumPy array
img_array = np.array(img)
# Flatten the array
flat_array = img_array.flatten()
# Create an index array that records the original pixel positions
index_array = np.arange(flat_array.shape[0])
# Shuffle the 1D arrays using the same random permutation
shuffled_index_array = np.random.permutation(index_array)
shuffled_array = flat_array[shuffled_index_array]
# Reshape the shuffled 1D array to the original image shape
shuffled_img_array = shuffled_array.reshape(img_array.shape)
# Convert the NumPy array to PIL image
shuffled_img = Image.fromarray(shuffled_img_array)
# Save the shuffled image
shuffled_img.save('shuffled_image.png')
# Save the shuffled index array as integers to a text file
np.savetxt('shuffled_index_array.txt', shuffled_index_array.astype(int), fmt='%d')
# Load the shuffled index array from the text file
shuffled_index_array = np.loadtxt('shuffled_index_array.txt', dtype=int)
# Rearrange the shuffled array using the shuffled index array
reshuffled_array = shuffled_array[shuffled_index_array]
# Reshape the flat array to the original image shape
reshuffled_img_array = reshuffled_array.reshape(img_array.shape)
# Convert the NumPy array to PIL image
reshuffled_img = Image.fromarray(reshuffled_img_array)
# Save the reshuffled image
reshuffled_img.save('reshuffled_image.png')
I'm trying to shuffle the pixel positions in an image, but I'm stuck on what is going wrong here.
You are really just missing the inversion of the permutation performed by numpy in the line np.random.permutation(index_array). You can obtain it by changing the line that creates the reshuffled array to the following:
# Rearrange the shuffled array using the shuffled index array
reshuffled_array = shuffled_array[np.argsort(shuffled_index_array)]
An explanation for reversion can be found here: Inverting permutations in Python
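A small standalone sketch (not from the original code) showing why np.argsort undoes the shuffle: for a permutation array, argsort returns the inverse permutation, i.e. for each original position, the place where that position's element ended up.
import numpy as np
perm = np.random.permutation(5)  # e.g. [3 0 4 1 2]
inv = np.argsort(perm)           # inverse permutation
data = np.array([10, 20, 30, 40, 50])
shuffled = data[perm]
restored = shuffled[inv]
print(np.array_equal(restored, data))  # True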
I have a list of PIL images: p0, p1, ..., p85999 (a total of 86000 of them). They are all RGB, of size 30x30px.
I need to convert them to normalized numpy arrays, I did the following:
[np.asarray(r).astype('float32') / 255.0 for r in images]
where r is a PIL image.
This gives an array of numpy arrays.
However, these arrays are sometimes of shape (30,30,3) and sometimes of shape (30,30).
I want them always to be of shape (30,30,3).
I'm guessing numpy does this for performance reasons (when RGB is not needed, e.g. white images?).
Anyway, how do I get the desired result, i.e. all numpy arrays of shape (30, 30, 3)?
Also, ideally I would want my final numpy array to have shape (30, 30, 3, 86000). Is there a shortcut to create such an array straight from the PIL images?
"I'm guessing numpy does this for performance reasons"
Numpy has nothing to do with it; it is your PIL Image that has only one channel.
The simplest solution is to just convert everything to RGB:
ims = [np.asarray(r.convert('RGB')).astype('float32') / 255.0 for r in images]
If you then call np.asarray(ims), you'll obtain an array of shape [N,30,30,3] where N is the number of images, which you can then transpose to your desired ordering.
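A minimal sketch of that last step (assuming, as in the question, that images is the list of PIL images and that each is 30x30 after the RGB conversion):
import numpy as np
ims = [np.asarray(r.convert('RGB')).astype('float32') / 255.0 for r in images]
stack = np.asarray(ims)              # shape (86000, 30, 30, 3)
stack = stack.transpose(1, 2, 3, 0)  # shape (30, 30, 3, 86000)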
I have a greyscale image, represented by a 2D array of integers, shape (1000, 1000).
I then use sklearn.feature_extraction.image.extract_patches_2d() to generate an array of 3x3 'patches' from this image, resulting in an array of shape (1000000, 3, 3): one million 3x3 arrays, one for each pixel of the original image.
I reshape this to (1000, 1000, 3, 3), which is a 1000x1000 array of 3x3 arrays, one 3x3 array for each pixel in the original image.
I now want to effectively subtract the 2D array from the 4D array. I have already found a method to do this, but I would like one that uses vectorisation.
I currently iterate through each pixel and subtract its value from the 3x3 array at the same index. This is a little slow.
This is the code that currently loads the images, formats the arrays beforehand, and then performs the subtraction.
from PIL import Image, ImageOps
from skimage import io
from sklearn.feature_extraction import image
import numpy
jitter = 1
patchsize = (jitter*2)+1
#load image as greyscale image using PIL
original = load_image_greyscale(filename)
#create a padded version of the image so that 1000x1000 patches are made
#instead of 998x998
padded = numpy.asarray(ImageOps.expand(original,jitter))
#extract these 3x3 patches using sklearn
patches = image.extract_patches_2d(padded,(patchsize,patchsize))
#convert image to numpy array
pixel_array = numpy.asarray(original)
#then reshape the array of patches so it matches pixel_array
patch_array = numpy.reshape(patches, (pixel_array.shape[0],pixel_array.shape[1],patchsize,patchsize))
#create a copy for results
patch_array_copy = numpy.copy(patch_array)
#iterate over each 3x3 array in the patch array and subtract the pixel value
#at the same index in the pixel array
for x in range(pixel_array.shape[0]):
    for y in range(pixel_array.shape[1]):
        patch_array_copy[x,y] = patch_array[x,y] - pixel_array[x,y]
I would like a way to perform the final step in the for loop using matrix operations.
I would also like to extend this at some point to work with RGB images, effectively subtracting an array with shape (1000,1000,3) from an array with shape (1000,1000,3,3,3). But I'm trying to go one step at a time here.
Any help, tips, suggestions or links to helpful resources would be greatly appreciated.
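One way to replace the loop (a broadcasting sketch, not something from the original post) is to add two trailing axes to pixel_array so that its shape (1000, 1000) broadcasts against patch_array's (1000, 1000, 3, 3):
# pixel_array[:, :, None, None] has shape (1000, 1000, 1, 1), which
# broadcasts element-wise against patch_array's (1000, 1000, 3, 3)
patch_array_copy = patch_array - pixel_array[:, :, None, None]
The same idea should extend to the RGB case: pixel_array[:, :, None, None, :] has shape (1000, 1000, 1, 1, 3), which broadcasts against (1000, 1000, 3, 3, 3), assuming the channel axis comes last in the patch array.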
I have a problem creating a greyscale image from an existing 2D array in Python.
Suppose I have a 2D array, X, with dimensions 504x896 and data type uint8. How can I create a greyscale image from this 2D array? Does OpenCV provide a function for it, or do I have to include another image processing library?
I have tried this but it did not work for some reason. Given a 2D array z
dim = z.shape
h = dim[0]
w = dim[1]
copy_image = np.zeros((h,w,1), np.uint8)
copy_image = z.copy();
cv2.imwrite("cpImage.bmp",copy_image)
This should work as expected. I have used float32 for the input array; cv2.cvtColor accepts it, but cv2.imwrite expects 8-bit data, hence the cast at the end.
import numpy as np
import cv2
nArray = np.zeros((504, 896), np.float32) #Create the arbitrary input img
bgrArray = cv2.cvtColor(nArray, cv2.COLOR_GRAY2BGR) #replicate the grey channel into 3 BGR channels
cv2.imwrite("cpImage.bmp", bgrArray.astype(np.uint8))
Below is a simple section of code that opens an image using PIL, converts it to a numpy array and then prints the number of elements in the array.
The image in question is a 10x10 TIFF consisting of exactly 100 pixels. However, the numpy array contains 300 elements (where I would expect 100 elements). What am I doing wrong?
import numpy as np
import PIL
impath = 'C:/Users/Ricky/Desktop/testim.tif'
im = PIL.Image.open(impath)
arr = np.array(im)
print(arr.size) # 300
Every image can be composed of 3 bands (a red-green-blue, or RGB, composition).
Since your image is a black/white image, those three bands are identical. You can see the difference using a coloured image.
Try this to see what I mean:
import matplotlib.pyplot as pyplot
# the line above imports matplotlib's plotting interface, used below to show the image
import numpy as np
import PIL
impath = 'C:/Users/Ricky/Desktop/testim.tif'
im = PIL.Image.open(impath)
arr = np.array(im)
print(arr.shape) # (10, 10, 3)
print(arr[:, :, 0].size) # 100
# next lines actually show the image
pyplot.imshow(arr[:, :, 0], cmap='gray')
pyplot.show()
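If you actually want a single 100-element array, one option (a small sketch using PIL's mode conversion, continuing from the code above) is to collapse the image to a single band before converting:
arr_grey = np.array(im.convert('L')) # force a single greyscale band
print(arr_grey.shape) # (10, 10)
print(arr_grey.size) # 100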