keras reshape input image to work with CNN - python

There are other posts with similar questions, but none of the answers are helping me. I'm new to this CNN world.
I followed this tutorial for training a CNN with Keras, using Theano as the backend and the MNIST dataset. Now I want to pass my own JPG image to the CNN, but I don't know how to reshape it. Can you help me, please? I'm super new at this.
So far, I have tried this to reshape it:
image = np.expand_dims(image, axis=0)
image = preprocess_input(image)
but I get the following error when predicting:
ValueError: Error when checking : expected conv2d_1_input to have shape (None, 1, 28, 28) but got array with shape (1, 3, 28, 28)
As you can see, my CNN uses width = 28, height = 28, and depth = 1.

Try using NumPy for reshaping. Since you are using a 2D convolutional model with channels-first ordering, the input needs the shape (batch, channels, height, width), i.e. (1, 1, 28, 28) for a single grayscale image:
image = np.reshape(image, (1, 1, 28, 28))
Note that this only works once the image has a single channel (28 * 28 = 784 values); see the next answer for converting a color image to grayscale.

The error message shows that the network expects the input shape to be 1x28x28, but your input is 3x28x28. I guess the image you are passing in is a color image with 3 channels (RGB), while the network expects a grayscale image with one channel.
When you call OpenCV to read the image, please use the code below.
img = cv2.imread(imgfile, cv2.IMREAD_GRAYSCALE)

Simply use:
image = np.reshape(image, (1, 1, 28, 28))
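Putting these answers together, here is a minimal end-to-end sketch, assuming a trained channels-first Keras model named model and an image file 'digit.jpg' (both names are placeholders):
import cv2
import numpy as np

# Read as a single-channel grayscale image: shape (H, W)
img = cv2.imread('digit.jpg', cv2.IMREAD_GRAYSCALE)
# Resize to the 28x28 resolution the network was trained on
img = cv2.resize(img, (28, 28))
# Scale pixel values to [0, 1], matching typical MNIST preprocessing
img = img.astype('float32') / 255.0
# Reshape to (batch, channels, height, width) = (1, 1, 28, 28)
img = img.reshape(1, 1, 28, 28)
pred = model.predict(img)
print(pred.argmax())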

Related

what is the use of expand_dims in image processing?

I saw a face detection model which consists of the function below, but I could not understand the use of the expand_dims function. Can anyone explain what it is and why we are using it?
from numpy import expand_dims

def get_embedding(model, face_pixels):
    face_pixels = face_pixels.astype('float32')
    # Standardize the pixel values
    mean, std = face_pixels.mean(), face_pixels.std()
    face_pixels = (face_pixels - mean) / std
    # Add a batch dimension: (h, w, c) -> (1, h, w, c)
    samples = expand_dims(face_pixels, axis=0)
    yhat = model.predict(samples)
    return yhat[0]
tf.keras.layers.Conv2D layers expect input with a 4D shape:
(n_samples, height, width, channels)
Most libraries that load images will load them in 3D, like this:
(height, width, channels)
By using np.expand_dims(image, axis=0) or tf.expand_dims(image, axis=0), you add a batch dimension at the beginning, effectively turning your data into the 4D format that Keras needs for Conv2D layers. For instance:
(224, 224, 3)
to:
(1, 224, 224, 3)
If you give Conv2D 3D data, it will raise an error like this:
ValueError: Error when checking input: expected conv2d_19_input to have 4 dimensions, but got array with shape (60000, 28, 28)
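To see this concretely, here is a minimal sketch with NumPy (the array contents are dummy data):
import numpy as np

image = np.zeros((224, 224, 3))        # one image: (height, width, channels)
batch = np.expand_dims(image, axis=0)  # add the batch dimension at the front
print(image.shape)  # (224, 224, 3)
print(batch.shape)  # (1, 224, 224, 3)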

Reshaping 2D Grayscale into 4D for Keras Model Inference

I have a pre-trained Keras model that I need to use to classify a 512x512 image that is originally in grayscale format. The input to the Keras model should have the shape (None, 512, 512, 1).
I executed the following code:
from keras.models import load_model
from PIL import Image
import numpy as np

model = load_model('model.h5')
img = Image.open('img.jpg')
img_array = np.array(img)
img_array = img_array / 255
model.predict(img_array)
However, I get the following error
Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (512, 512)
I know that I need to reshape my grayscale image into 4D to match the desired input shape; however, I am not sure how to do this so that the image keeps its original features. How can I make the grayscale image 4D properly?
Thanks.
Try reshaping the array:
img_array = img_array.reshape((1, 512, 512, 1))
Here the first and last dimensions are the batch size and the number of channels, respectively.
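Equivalently, you can add the two axes explicitly with np.expand_dims or np.newaxis, which makes the intent a bit clearer (a small sketch, assuming img_array is the (512, 512) array and model is the loaded model from the question):
import numpy as np

batch = np.expand_dims(img_array, axis=0)  # (512, 512)    -> (1, 512, 512)
batch = np.expand_dims(batch, axis=-1)     # (1, 512, 512) -> (1, 512, 512, 1)
# or, in one step:
batch = img_array[np.newaxis, :, :, np.newaxis]
model.predict(batch)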

Unexpected shape in numpy array of a pillow image?

I have built a neural network to detect handwritten digits using the MNIST dataset.
The network takes an input shape of (28,28) as the training MNIST images are 28x28 grayscale.
I now want to test my neural network on some of my own handwriting.
The images I have are not 28x28 grayscale images so I am trying to convert them so that my model will accept them to make predictions.
Currently I have the following:
from PIL import Image
import numpy as np

img = Image.open('image.png').convert('LA')
newImg = img.resize((28, 28), Image.ANTIALIAS)
toPredict = np.array(newImg)
However, this is giving me a numpy array of shape (28, 28, 2), which I don't understand.
After conversion to grayscale and resizing, I should have a 28x28 array (28 pixels of height by 28 pixels of width).
I don't understand why the shape is not that.
Can anyone help me get the shape to be 28x28 (and explain why it isn't already) so I can pass this to my neural network?
Thank you!
You're almost there.
img = Image.open('image.png').convert('LA') is 28x28x2 because it is greyscale with an alpha channel.
Instead, convert it to plain greyscale with:
img = Image.open('image.png').convert('L')
You can see more information on the modes here:
https://pillow.readthedocs.io/en/latest/handbook/concepts.html#modes
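A quick way to see the difference between the two modes (a small sketch; 'image.png' is the file from the question):
from PIL import Image
import numpy as np

img_la = np.array(Image.open('image.png').convert('LA'))
print(img_la.shape)  # (H, W, 2): grey value plus alpha channel

img_l = np.array(Image.open('image.png').convert('L'))
print(img_l.shape)   # (H, W): grey value only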

How to modify the tensor data dimension in PyTorch?

I want to transform the tensor data to numpy data and save it through OpenCV, but OpenCV requires the data dimensions to be in a style like [1, something, something, something], while my tensor data is a batched one with a size like [30, something, something, something]. How can I modify the data dimensions in PyTorch?
PS: Is there any function in PyTorch that can save data as a binary picture? I used the save_image function to save my tensor data, whose values are all 1 or 0, to a picture, but the picture shown is still gray-scale. If there is any other way to save tensor data as a binary picture, please tell me.
import cv2
import torch

def save_image_tensor2cv2(input_tensor, filename):
    # Expect a 4D tensor with a batch size of 1: [1, C, H, W]
    assert len(input_tensor.shape) == 4 and input_tensor.shape[0] == 1
    input_tensor = input_tensor.clone().detach()
    input_tensor = input_tensor.to(torch.device('cpu'))
    input_tensor = input_tensor.squeeze()  # [1, C, H, W] -> [C, H, W]
    # Scale to [0, 255], round, reorder CHW -> HWC, convert to a uint8 numpy array
    input_tensor = input_tensor.mul_(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).type(torch.uint8).numpy()
    cv2.imwrite(filename, input_tensor)
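To go from your [30, C, H, W] batch to the [1, C, H, W] shape this function expects, slice out one image at a time while keeping the leading dimension (a usage sketch; batch is a placeholder for your tensor):
# batch: tensor of shape [30, C, H, W]
for i in range(batch.shape[0]):
    # batch[i:i+1] keeps the batch dimension, giving shape [1, C, H, W]
    save_image_tensor2cv2(batch[i:i+1], f"image_{i}.png")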
import cv2
import numpy as np

batch = next(iter(dataloader_test))
print(batch.shape)
# torch.Size([4, 3, 160, 160])

# Move the channel axis to the end: NCHW -> NHWC, which is what OpenCV expects
print(np.transpose(batch.numpy(), (0, 2, 3, 1)).shape)
# (4, 160, 160, 3)

image = np.transpose(batch.numpy(), (0, 2, 3, 1))
cv2.imwrite("image.png", image[0])
You might have to un-normalize the data before saving it, though.
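For the PS question, one option is to binarize to {0, 255} before writing, so the saved file looks black-and-white rather than gray (a sketch under the assumption that the tensor holds values in [0, 1]; the 0.5 threshold is arbitrary):
import cv2
import torch

def save_binary(img_tensor, filename):
    # img_tensor: [C, H, W] float tensor with values in [0, 1]
    binary = (img_tensor > 0.5).to(torch.uint8) * 255  # values become 0 or 255
    binary = binary.permute(1, 2, 0).cpu().numpy()     # CHW -> HWC for OpenCV
    cv2.imwrite(filename, binary)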

Tensorflow example with own handwritten images

I have tried the TensorFlow example with the Zalando MNIST (Fashion-MNIST) dataset here:
https://www.tensorflow.org/tutorials/keras/basic_classification
After that I replaced the clothes images with the handwritten MNIST database, which also works.
Now I want to train the AI with the handwritten MNIST database, take a picture of my handwritten "1", and let the AI guess the number.
I appended some lines of code after the training of the AI.
What I tried is this:
ownPicArr = imageio.imread(filename)  # it is a 28x28 PNG file
ownPicArr = ownPicArr / 255.0
pred = model.predict(ownPicArr)
I got following error:
ValueError: Error when checking input: expected flatten_input to have 3 dimensions, but got array with shape (28, 28)
How can I solve this problem? Thank you...
Even if the colours of your picture were inverted, this is how you could perform the prediction using OpenCV:
import cv2
import numpy as np
import tensorflow as tf
from PIL import Image

image = cv2.imread(imagePath)
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((28, 28))
p = np.expand_dims(size_image, 0)
img = tf.cast(p, tf.float32)
pred = model.predict(img)
First we read the image using OpenCV, which stores it as an array. We then convert the array to a PIL image, specifying the colour channels. After resizing the image, we create a batch containing the single image, change the datatype to float32 (or whatever datatype matches your model), and finally make the prediction.
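Since the error mentions flatten_input expecting 3 dimensions, the model presumably takes (batch, 28, 28) grayscale input, so a grayscale pipeline may fit better. A sketch, where 'one.png' and model are placeholders, and the inversion step assumes MNIST-style white-on-black digits:
import cv2
import numpy as np

img = cv2.imread('one.png', cv2.IMREAD_GRAYSCALE)  # (H, W)
img = cv2.resize(img, (28, 28))                    # (28, 28)
img = img.astype('float32') / 255.0
img = 1.0 - img                      # invert: MNIST digits are white on black
batch = np.expand_dims(img, axis=0)                # (1, 28, 28)
pred = model.predict(batch)
print(np.argmax(pred[0]))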
