Failing to convert a list of ndarrays to a numpy array - Python

I am reading thousands of images (all three channels), one by one, as numpy ndarrays and appending them to a list. At the end I want to convert this list into a numpy array:
import numpy as np
from PIL import Image

def read_image_path(path, img_size=227):
    img = Image.open(path)
    img = np.array(img.resize([img_size, img_size]))
    return img
I read each image path from a dictionary that looks like:
{1:{'img_path': 'path-to-image', 'someOtherKeys':'...'}, 2:{...}}
images = []
for key in dataset_dictionary:
    img = read_image_path(dataset_dictionary[key]['img_path'])
    images.append(img)
Up to here it's all fine: I have a list of ndarray image matrices of size (227,227,3). But when I try to convert "images" to a numpy array and return it from the function, it gives the following error:
return np.array(images)
ValueError: could not broadcast input array from shape (227,227,3) into shape (227,227)
I would be grateful for anyone's ideas about this.

Most likely you have an img (or several) with shape (227,227) instead of (227,227,3), i.e. a grayscale image mixed in with the RGB ones.
The following code should tell you which image is the offender.
for key in dataset_dictionary:
    img = read_image_path(dataset_dictionary[key]['img_path'])
    if img.shape != (227, 227, 3):
        print(key)
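One way to avoid the problem entirely (a sketch, not part of the original answer, assuming Pillow): force every file to three channels at load time with convert('RGB'), so grayscale and RGBA images also come out as (227,227,3):
import numpy as np
from PIL import Image

def read_image_path(path, img_size=227):
    # convert('RGB') forces 3 channels, so grayscale and RGBA files
    # also produce arrays of shape (img_size, img_size, 3)
    img = Image.open(path).convert('RGB')
    return np.array(img.resize([img_size, img_size]))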

Related

Why is it showing different images in a 4D numpy array?

I'm trying to create a 4D array for a bunch of 3D images. I can load an image and show it correctly, but after storing it in the 4D array and showing the image from the array, I get gibberish.
I tried to check whether the loaded image and the one read back from the 4D array are equal, and it prints True.
import os
from glob import glob
import numpy as np
from PIL import Image

IMG_PATH = '32x32'
img_paths = glob(os.path.join(IMG_PATH, '*.jpg'))
images = np.empty((len(img_paths), 32, 32, 3))
for i, path_i in enumerate(img_paths):
    img_i = np.array(Image.open(path_i))
    Image.fromarray(img_i, 'RGB').show()      # showing correct image
    images[i] = img_i
    Image.fromarray(images[i], 'RGB').show()  # showing gibberish
    print(np.array_equal(img_i, images[i]))   # True
    if i == 0:
        break
I expect the exact same image to be shown after I run images[i] = img_i.
This line is performing a cast:
images[i] = img_i
since images.dtype == np.float64 while img_i.dtype is probably np.uint8.
You can catch this type of mistake by specifying a casting rule:
np.copyto(images[i], img_i, casting='no')
# TypeError: Cannot cast scalar from dtype('uint8') to dtype('float64') according to the rule 'no'
You can fix this by allocating the array with the right type:
images = np.empty((len(img_paths), 32, 32, 3), dtype=np.uint8)
Or you can let numpy do the allocation for you, but this will temporarily use almost twice the memory:
images = np.stack([
    Image.open(path_i)
    for path_i in img_paths
], axis=0)
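To see why np.array_equal prints True while the display is wrong (a sketch with a made-up uint8 array, not from the original answer): the values survive the cast to float64, but the in-memory bytes that Image.fromarray(..., 'RGB') reinterprets do not:
import numpy as np

rng = np.random.default_rng(0)
img_u8 = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
img_f64 = img_u8.astype(np.float64)     # what images[i] = img_i stores

print(np.array_equal(img_u8, img_f64))  # True: the values are equal
print(img_u8.nbytes, img_f64.nbytes)    # 3072 vs 24576: eight times the bytes,
                                        # which fromarray(..., 'RGB') then misreads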

Load a 3D .mat file as a grayscale numpy array

I want to convert a 3D image into a numpy array, making sure to preserve the fact that it is a grayscale image.
I have created an empty array into which I would like to load these images, where x, y, z are the respective dimensions and channels = 1.
img_array = np.ndarray((len(directory), x, y, z, channels), dtype=np.uint8)
This is the code I used to convert the original .mat files to arrays and load each one into the empty array I just created:
from scipy.io import loadmat

i = 0
for array in directory:
    OG = loadmat(array, appendmat=True)  # load the .mat file
    OG = OG['new_OG']                    # get the actual image out of the dict
    OG = np.array(OG)                    # convert to an array
    img_array[i] = OG                    # store in the empty array
    i += 1
However, when I try to store it in img_array in the last line, it doesn't work because of the following error:
ValueError: could not broadcast input array from shape (80,84,80) into shape (80,84,80,1)
So how can I make sure I turn the .mat file into a numpy array with the shape I need: (x,y,z,1)?
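No answer is quoted for this question, but the broadcast error points at the fix: the source array is missing the trailing channel axis. A minimal sketch (my suggestion, not from the thread), expanding OG to (x, y, z, 1) before the assignment:
import numpy as np

OG = np.zeros((80, 84, 80), dtype=np.uint8)  # stand-in for the loaded .mat volume
OG = OG[..., np.newaxis]                     # (80, 84, 80) -> (80, 84, 80, 1)
print(OG.shape)                              # (80, 84, 80, 1)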


Create Numpy array of images

I have some (950) 150x150x3 .jpg image files that I want to read into a Numpy array.
Following is my code:
import glob

import cv2
import numpy as np

X_data = []
files = glob.glob("*.jpg")
for myFile in files:
    image = cv2.imread(myFile)
    X_data.append(image)

print('X_data shape:', np.array(X_data).shape)
The output is (950, 150). Please let me know why the list is not getting converted to np.array correctly and whether there is a better way to create the array of images.
From what I have read, appending to numpy arrays is more easily done by building a Python list and then converting it to an array.
EDIT: Some more information (if it helps): image.shape returns (150,150,3) correctly.
I tested your code. It works fine for me with output
('X_data shape:', (4, 617, 1021, 3))
however, all images had exactly the same dimensions.
When I add another image with different extents I have this output:
('X_data shape:', (5,))
So I'd recommend checking the sizes and that the number of channels matches (are all images really colour images?). Also check that either all images (or none) have alpha channels (see Gughan Ravikumar's comment).
If only the number of channels varies (i.e. some images are grey), then force loading all of them in the color format with:
image = cv2.imread(myFile, cv2.IMREAD_COLOR)
EDIT:
I used the exact code from the question, only replacing the pattern with a directory of mine (and "*.PNG"):
import cv2
import glob
import numpy as np

X_data = []
files = glob.glob("C:/Users/xxx/Desktop/asdf/*.PNG")
for myFile in files:
    print(myFile)
    image = cv2.imread(myFile)
    X_data.append(image)

print('X_data shape:', np.array(X_data).shape)
Appending images to a list and then converting the list into a numpy array was not working for me: I have a large dataset and the RAM crashes every time I attempt it. Instead I append to a numpy array directly, but this has its own cons: appending to a list and then converting is costly in space, while appending to a numpy array is costly in time (each np.append copies the whole array). If you are patient enough, this will take care of the RAM crashing problem.
import os

import numpy as np
from PIL import Image
from tqdm import tqdm

def imagetensor(imagedir):
    for i, im in tqdm(enumerate(os.listdir(imagedir))):
        image = Image.open(os.path.join(imagedir, im))  # os.listdir gives bare names
        image = image.convert('HSV')
        if i == 0:
            images = np.expand_dims(np.array(image, dtype=float) / 255, axis=0)
        else:
            image = np.expand_dims(np.array(image, dtype=float) / 255, axis=0)
            images = np.append(images, image, axis=0)
    return images
I am looking for better implementations that can take care of both space and time. Please comment if someone has a better idea.
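A sketch of one such implementation (my suggestion, assuming all images share one known size): preallocate the whole array once with np.empty, which avoids both the list's second copy and the quadratic copying that repeated np.append causes:
import os

import numpy as np
from PIL import Image

def imagetensor(imagedir, size=(150, 150)):
    paths = [os.path.join(imagedir, f) for f in sorted(os.listdir(imagedir))]
    # one allocation up front: no list-to-array copy, no per-step np.append copy
    images = np.empty((len(paths), size[1], size[0], 3), dtype=np.float32)
    for i, path in enumerate(paths):
        img = Image.open(path).convert('RGB').convert('HSV').resize(size)
        images[i] = np.asarray(img, dtype=np.float32) / 255
    return images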
Here is a solution for images whose paths contain certain special Unicode characters, or for PNGs with a transparency layer, which are two cases that I had to handle with my dataset. In addition, if there are any images that aren't of the desired resolution, they will not be added to the Numpy array. This uses the Pillow package instead of cv2.
import glob

import numpy as np
from PIL import Image

resolution = 150

X_data = []
files = glob.glob(r"D:\Pictures\*.png")
for my_file in files:
    print(my_file)
    image = Image.open(my_file).convert('RGB')
    image = np.array(image)
    if image is None or image.shape != (resolution, resolution, 3):
        print(f'This image is bad: {my_file} {image.shape if image is not None else "None"}')
    else:
        X_data.append(image)

print('X_data shape:', np.array(X_data).shape)
# If you have 950 150x150 images, this would print 'X_data shape: (950, 150, 150, 3)'
If you aren't using Python 3.6+, you can replace the f-string with regular string interpolation, and the r-string with a regular string (using \\ instead of \ if you're on Windows).
Your definition for the .JPG frame that will be put into a matrix of the same size should be x, y, R, G, B, A. The "A" is not used, but it still takes up 8 bits at the end of each pixel.

How do I convert a PIL Image into a NumPy array?

How do I convert a PIL Image back and forth to a NumPy array so that I can do faster pixel-wise transformations than PIL's PixelAccess allows? I can convert it to a NumPy array via:
pic = Image.open("foo.jpg")
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)
But how do I load it back into the PIL Image after I've modified the array? pic.putdata() isn't working well.
You're not saying how exactly putdata() is not behaving. I'm assuming you're doing
>>> pic.putdata(a)
Traceback (most recent call last):
  File "...blablabla.../PIL/Image.py", line 1185, in putdata
    self.im.putdata(data, scale, offset)
SystemError: new style getargs format but argument is not a tuple
This is because putdata expects a sequence of tuples and you're giving it a numpy array. This
>>> data = list(tuple(pixel) for pixel in pix)
>>> pic.putdata(data)
will work but it is very slow.
As of PIL 1.1.6, the "proper" way to convert between images and numpy arrays is simply
>>> pix = numpy.array(pic)
although the resulting array is in a different format than yours (a 3-d array of rows/columns/rgb in this case).
Then, after you make your changes to the array, you should be able to do either pic.putdata(pix) or create a new image with Image.fromarray(pix).
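A minimal round trip in the spirit of this answer (a sketch, with an invert standing in for the actual edit and foo.jpg assumed to be RGB):
import numpy
from PIL import Image

pic = Image.open("foo.jpg")
pix = numpy.array(pic)    # (height, width, 3) uint8 array
pix = 255 - pix           # any pixel-wise transformation; invert as a stand-in
Image.fromarray(pix).save("foo_inverted.jpg")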
Open I as an array:
>>> I = numpy.asarray(PIL.Image.open('test.jpg'))
Do some stuff to I, then, convert it back to an image:
>>> im = PIL.Image.fromarray(numpy.uint8(I))
Source: Filter numpy images with FFT, Python
If you want to do it explicitly for some reason, there are pil2array() and array2pil() functions using getdata() on this page in correlation.zip.
I am using Pillow 4.1.1 (the successor of PIL) in Python 3.5. The conversion between Pillow and numpy is straightforward.
from PIL import Image
import numpy as np
im = Image.open('1.jpg')
im2arr = np.array(im) # im2arr.shape: height x width x channel
arr2im = Image.fromarray(im2arr)
One thing worth noting is that Pillow-style im is column-major while numpy-style im2arr is row-major. However, the function Image.fromarray already takes this into consideration: arr2im.size == im.size and arr2im.mode == im.mode in the above example.
We should take care of the HxWxC data format when processing the transformed numpy arrays; to get CxHxW format, apply im2arr = np.rollaxis(im2arr, 2, 0) or im2arr = np.transpose(im2arr, (2, 0, 1)).
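A quick shape check of that HWC-to-CHW transform (a sketch using a tiny made-up array):
import numpy as np

im2arr = np.zeros((2, 3, 3), dtype=np.uint8)  # HxWxC: 2 rows, 3 columns, RGB
chw = np.transpose(im2arr, (2, 0, 1))         # reorder axes to CxHxW
print(chw.shape)                              # (3, 2, 3)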
You need to convert your image to a numpy array this way:
import numpy
import PIL
img = PIL.Image.open("foo.jpg").convert("L")
imgarr = numpy.array(img)
Convert Numpy to PIL image and PIL to Numpy
import numpy as np
from PIL import Image

def pilToNumpy(img):
    return np.array(img)

def NumpyToPil(img):
    return Image.fromarray(img)
The example I have used today:
import PIL
import numpy
from PIL import Image

def resize_image(numpy_array_image, new_height):
    # convert numpy array image to PIL.Image
    image = Image.fromarray(numpy.uint8(numpy_array_image))
    old_width = float(image.size[0])
    old_height = float(image.size[1])
    ratio = float(new_height / old_height)
    new_width = int(old_width * ratio)
    image = image.resize((new_width, new_height), PIL.Image.ANTIALIAS)
    # convert the PIL.Image back into a numpy array
    return numpy.array(image)
If your image is stored in Blob format (i.e. in a database), you can use the same technique explained by Billal Begueradj to convert your image from Blobs to a byte array.
In my case, the images were stored in a blob column in a db table:
def select_all_X_values(conn):
    cur = conn.cursor()
    cur.execute("SELECT ImageData from PiecesTable")
    rows = cur.fetchall()
    return rows
I then created a helper function to change my dataset into np.array:
X_dataset = select_all_X_values(conn)
imagesList = convertToByteIO(np.array(X_dataset))
from io import BytesIO

import numpy as np
from PIL import Image

def convertToByteIO(imagesArray):
    """
    Converts an array of image blobs into a list of numpy arrays
    """
    imagesList = []
    for i in range(len(imagesArray)):
        img = Image.open(BytesIO(imagesArray[i])).convert("RGB")
        imagesList.insert(i, np.array(img))
    return imagesList
After this, I was able to use the byteArrays in my Neural Network.
plt.imshow(imagesList[0])
I can vouch for svgtrace; I found it both super simple and relatively fast. Find it here: https://pypi.org/project/svgtrace/
This is how I used it:
from pathlib import Path

from svgtrace import trace

asset_path = 'image.png'
save_path = 'traced_image.svg'

Path(save_path).write_text(trace(asset_path), encoding='utf-8')
It took an average of 3 seconds for a 1080x1080px image on my machine. (MacBook Pro 2017)
import numpy as np
import matplotlib.pyplot as plt

def imshow(img):
    img = img / 2 + 0.5  # unnormalize from [-1, 1] back to [0, 1]
    npimg = img.numpy()  # torch.Tensor -> numpy array
    plt.imshow(np.transpose(npimg, (1, 2, 0)))  # CxHxW -> HxWxC
    plt.show()
You can transform the image into a numpy array by calling its numpy() method, after undoing the normalization (the img / 2 + 0.5 step above).
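A usage sketch for that helper (assuming PyTorch, with a random tensor standing in for a real normalized image):
import torch

# fake 3x32x32 image normalized to [-1, 1], as torchvision transforms often produce
img = torch.rand(3, 32, 32) * 2 - 1
imshow(img)  # unnormalizes back to [0, 1] and displays it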
