I have trained a ResNet101V2 model in Keras and saved it. When I load the model and try to classify a new image, I keep getting this error: ValueError: Input 0 is incompatible with layer model_7: expected shape=(None, 255, 255, 3), found shape=(None, 255, 3)
Here's my code:
load_path = 'path to my model'
model = keras.models.load_model(load_path)
image_path = 'path to my image'
img_np = cv2.imread(image_path, cv2.IMREAD_COLOR)
resized_img_np = cv2.resize(img_np, (255, 255))
print(resized_img_np.shape) # <============= PRINTS (255, 255, 3)
prediction = model.predict(resized_img_np) # <========= ERROR
You need to add an extra dimension to match the batch size the model expects. Add the dimension with np.expand_dims and then pass the resized image to the model for prediction:
import numpy as np
resized_img_np = np.expand_dims(resized_img_np, axis=0)
prediction = model.predict(resized_img_np)
As the model was trained on batches, you have to supply a batch dimension even for a single sample. The error indicates that the expected shape is:
(None, 255, 255, 3)
where None stands for the variable batch size. You can solve this simply by adding a 1 as the first dimension of your input image, showing that you are going to classify only one image. The shape then becomes (1, 255, 255, 3) instead of (255, 255, 3):
import numpy as np
resized_img_np = cv2.resize(img_np, (255, 255))
resized_img_np = np.expand_dims(resized_img_np, axis=0)
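For completeness, here is a minimal end-to-end sketch of the corrected pipeline. The paths are the same placeholders as in the question, and the BGR-to-RGB conversion is an assumption about how the model was trained (cv2.imread returns BGR, while most Keras pipelines use RGB):

import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model('path to my model')        # placeholder path
img_np = cv2.imread('path to my image', cv2.IMREAD_COLOR)  # placeholder path; BGR order
img_np = cv2.cvtColor(img_np, cv2.COLOR_BGR2RGB)           # assumption: model trained on RGB
resized_img_np = cv2.resize(img_np, (255, 255))            # (255, 255, 3)
batch = np.expand_dims(resized_img_np, axis=0)             # (1, 255, 255, 3)
prediction = model.predict(batch)                          # (1, num_classes)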
Related
I want to predict from an image URL. In the past I used the ImageDataGenerator().flow_from_directory() method, but now I have only one image, so I want to predict from this single image.
I have tried the code below, but it fails with a dimension error:
import requests
from PIL import Image
from tensorflow.keras.preprocessing.image import img_to_array

url = "http://3.36.149.28/uploads/WEBUPLOADprofile.png"
img = Image.open(requests.get(url, stream=True).raw)
img = img_to_array(img)
img = img/255.
#Predict
pred = model.predict(img)
So I tried reshaping and retrying, but that failed too (cannot reshape array of size 1048576 into shape (28,28,1)):
img = img.reshape(-1, 28, 28, 1)
img = img/255.
#Predict
pred = model.predict(img)
What should I do to reshape the image correctly and get a color prediction? Please help.
Additional: I trained an SRCNN model, and its input is:
inputs = Input((None, None, 3), dtype='float')
I resolved this problem.
First, my URL image had shape (None, None, 4) (RGBA), but my model was trained on shape (None, None, 3).
So I tried another JPG image with shape (None, None, 3) and expanded its dimensions via np.expand_dims, and the resulting shape was (1, None, None, 3):
image = np.expand_dims(image, axis=0)
model.predict(image)
Now I get the predicted image successfully.
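If you want to keep using the original PNG instead of switching to a JPG, a sketch like the following should work; it uses PIL's convert('RGB') to drop the alpha channel, and assumes the model is already loaded:

import requests
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import img_to_array

url = "http://3.36.149.28/uploads/WEBUPLOADprofile.png"
img = Image.open(requests.get(url, stream=True).raw).convert('RGB')  # drop the alpha channel
img = img_to_array(img) / 255.     # (H, W, 3), scaled to [0, 1]
img = np.expand_dims(img, axis=0)  # (1, H, W, 3)
pred = model.predict(img)          # model assumed to be loaded already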
I am new to VGG19 and image processing in Python. I am trying to test my trained VGG19 model on a new image and am getting this error:
ValueError: Input 0 is incompatible with layer functional_3: expected shape=(None, 224, 224, 3), found shape=(None, 240, 240, 3)
My TensorFlow code for prediction is:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import numpy as np
import cv2
import tensorflow as tf
from tensorflow.keras.models import load_model
model = load_model('VGG19.h5')
CATEGORIES = ["Pneumonia", "Non-Pneumonia"]
img = cv2.imread('person1_bacteria_1.jpeg')
img = cv2.resize(img,(240,240)) # resize image to match model's expected sizing
img = np.reshape(img,[1,240,240,3]) # return the image with shaping that TF wants.
prediction = model.predict(img)
prediction
In the case of the .ipynb file, I simply get a warning about this instead of the error.
You are resizing to the wrong shape. Instead of (240, 240):
img = cv2.resize(img,(240,240)) # resize image to match model's expected sizing
img = img.reshape(1,240,240,3) # return the image with shaping that TF wants.
use (224, 224):
img = cv2.resize(img,(224,224)) # resize image to match model's expected sizing
img = img.reshape(1,224,224,3) # return the image with shaping that TF wants.
Your pretrained model is expecting an input of shape (224, 224, 3) and you are feeding it (240, 240, 3), hence the complaint. Note that reshape alone cannot fix this: a (240, 240, 3) array has 172800 elements, which cannot be reshaped into (1, 224, 224, 3). So resize to 224 first, then add the batch dimension:
img = cv2.resize(img, (224, 224))
img = img.reshape(1, 224, 224, 3)
And you are good to go!
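Putting it together, a minimal corrected sketch. Whether you also need VGG19's preprocess_input depends on how the model was trained, so that step is an assumption:

import numpy as np
import cv2
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.vgg19 import preprocess_input

model = load_model('VGG19.h5')
img = cv2.imread('person1_bacteria_1.jpeg')
img = cv2.resize(img, (224, 224))              # match the model's expected input size
img = np.expand_dims(img, axis=0)              # (1, 224, 224, 3)
img = preprocess_input(img.astype('float32'))  # assumption: training used VGG19 preprocessing
prediction = model.predict(img)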
The example below is extracted from the official TensorFlow tutorial on data pipelines. Basically, one resizes a bunch of JPGs to be (128, 128, 3). For some reason, when applying the map() operation, the colour dimension, namely 3, is turned into None when examining the shape of the dataset. Why is that third dimension singled out? (I checked whether there were any images that weren't (128, 128, 3), but didn't find any.)
If anything, None should only show up for the very first dimension, i.e., the one that counts the number of examples; it should not affect the individual dimensions of the examples, since, as nested structures, they are supposed to have the same shape anyway in order to be stored as a tf.data.Dataset.
The code in TensorFlow 2.1 is
import os
import pathlib
import tensorflow as tf
# Download the files.
flowers_root = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
flowers_root = pathlib.Path(flowers_root)
# Compile the list of files.
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
# Reshape the images.
# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def parse_image(filename):
parts = tf.strings.split(filename, os.sep)  # split on the platform's path separator
label = parts[-2]
image = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, [128, 128])
print("Image shape:", image.shape)
return image, label
print("Map the parse_image() on the first image only:")
file_path = next(iter(list_ds))
image, label = parse_image(file_path)
print("Map the parse_image() on the whole dataset:")
images_ds = list_ds.map(parse_image)
and yields
Map the parse_image() on the first image only:
Image shape: (128, 128, 3)
Map the parse_image() on the whole dataset:
Image shape: (128, 128, None)
Why None in that last line?
From the tutorial, you are missing this part:
for image, label in images_ds.take(5):
    show(image, label)
The line
images_ds = list_ds.map(parse_image)
only traces parse_image to build the dataset pipeline; no actual image is passed through the function at that point (if you add a print, filename has no concrete value, only a symbolic tensor). In particular, tf.image.decode_jpeg cannot know a JPEG's channel count statically, so the channel dimension stays None in the traced shape. But if you use
for image, label in images_ds.take(5):
it iterates over the dataset, passing each image through the parse_image function and producing concrete (128, 128, 3) tensors.
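As a side note, a minimal sketch of how to pin the channel dimension statically, assuming all the photos are RGB JPEGs: passing channels=3 to tf.image.decode_jpeg tells TensorFlow the channel count at graph-construction time, so the mapped dataset reports (128, 128, 3) instead of (128, 128, None).

def parse_image(filename):
    parts = tf.strings.split(filename, os.sep)
    label = parts[-2]
    image = tf.io.read_file(filename)
    image = tf.image.decode_jpeg(image, channels=3)  # channel count now known statically
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize(image, [128, 128])
    return image, label

images_ds = list_ds.map(parse_image)  # element shapes are now (128, 128, 3)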
When feeding an image to a pretrained InceptionResNetV2 network, I have the following results.
from keras.applications.inception_resnet_v2 import InceptionResNetV2
INPUT_SHAPE = (200, 250, 3)
img = load_img() # loads a 200x250 rgb image into a (200, 250, 3) numpy array
assert img.shape == INPUT_SHAPE # just fine
model = InceptionResNetV2(include_top=False, input_shape=INPUT_SHAPE)
model.predict(img)
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (200, 250, 3)
I don't understand why and how the model expects a 4 dimension input. What must be done to adapt the (200, 250, 3) image so that it can be processed by the model?
Try reshaping your input to (1, 200, 250, 3), i.e., a batch containing one image.
You can use image = np.expand_dims(image, axis=0) or
image = img.reshape((-1, 200, 250, 3))
You need to feed a batch of images. If your batch contains just one image, it still needs the batch dimension:
img = img.reshape((1, 200, 250, 3))
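A minimal sketch of the fix, assuming load_img() is the hypothetical loader from the question that returns a (200, 250, 3) array:

import numpy as np
from keras.applications.inception_resnet_v2 import InceptionResNetV2

INPUT_SHAPE = (200, 250, 3)
model = InceptionResNetV2(include_top=False, input_shape=INPUT_SHAPE)

img = load_img()                     # hypothetical loader returning a (200, 250, 3) array
batch = np.expand_dims(img, axis=0)  # (1, 200, 250, 3): a batch of one
features = model.predict(batch)      # feature maps, since include_top=False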
I am using transfer learning for recognizing objects. I used the trained VGG16 model as the base model and added my own classifier on top of it using Keras. I then trained the model on my data, and it works well. Now I want to see the features generated by the intermediate layers of the model for a given input. I used the following code for this purpose:
import numpy as np
from keras.models import Model
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

def ModeloutputAtthisLayer(model, layernme, imgnme, width, height):
    intermediate_layer_model = Model(inputs=model.input,
                                     outputs=model.get_layer(layernme).output)
    img = image.load_img(imgnme, target_size=(width, height))
    imageArray = image.img_to_array(img)
    image_batch = np.expand_dims(imageArray, axis=0)
    processed_image = preprocess_input(image_batch.copy())
    intermediate_output = intermediate_layer_model.predict(processed_image)
    print("output shape of", layernme, "is", intermediate_output.shape)
In the code, I used np.expand_dims to add one extra dimension for the batch, as the input to the network should be of the form (batchsize, height, width, channels). This code works fine. The shape of the feature map is (1, 224, 224, 64).
Now I wish to display this as an image. I understand an extra batch dimension was added, so I should remove it first. Following this, I used these lines of code:
imge = np.squeeze(intermediate_output, axis=0)
plt.imshow(imge)
However, it throws an error:
"Invalid dimensions for image data"
How can I display the extracted feature maps as images? Any suggestions, please.
Your feature shape is (1, 224, 224, 64); you cannot directly plot a 64-channel image. What you can do is plot the individual channels independently, like the following:
import math
import matplotlib.pyplot as plt

imge = np.squeeze(intermediate_output, axis=0)
filters = imge.shape[2]
plt.figure(1, figsize=(32, 32))  # figure size in inches
n_columns = 8
n_rows = math.ceil(filters / n_columns)  # 8 rows for 64 filters
for i in range(filters):
    plt.subplot(n_rows, n_columns, i + 1)
    plt.title('Filter ' + str(i))
    plt.imshow(imge[:, :, i], interpolation="nearest", cmap="gray")
This will plot 64 images in 8 rows and 8 columns.
A possible alternative is to combine the 64 channels into a single-channel image through a weighted sum, like this:
weighted_imge = np.sum(imge*weights, axis=-1)
where weights is an array with 64 weighting coefficients.
If you wish to give all the channels the same weight you could simply compute the average:
weighted_imge = np.mean(imge, axis=-1)
Demo
import numpy as np
import matplotlib.pyplot as plt
intermediate_output = np.random.randint(size=(1, 224, 224, 64),
low=0, high=2**8, dtype=np.uint8)
imge = np.squeeze(intermediate_output, axis=0)
weights = np.random.random(size=(imge.shape[-1],))
weighted_imge = np.sum(imge*weights, axis=-1)
plt.imshow(weighted_imge)
plt.colorbar()
In [33]: intermediate_output.shape
Out[33]: (1, 224, 224, 64)
In [34]: imge.shape
Out[34]: (224, 224, 64)
In [35]: weights.shape
Out[35]: (64,)
In [36]: weighted_imge.shape
Out[36]: (224, 224)
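If you want the combined image to stay on the same scale as the individual channels, you could normalize the weights before the sum (an optional tweak, not part of the original demo):

weights = weights / weights.sum()  # convex combination: output stays in the channels' value range
weighted_imge = np.sum(imge * weights, axis=-1)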