Keras model won't use predict correctly - python

I have a Keras model that I have trained, evaluated, and even tested. Now I am trying to feed three new test images into the model.
I run the images through the same preprocessor I used to build the training data, and then do exactly what I did for the testing data. But it gives me this error:
Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays:
I don't know what is wrong with it.
This is how I test the model successfully:
y_pred = []
y_true = []
for i in range(0, len(test_x1)):
    x1 = test_x1[i]
    x2 = test_x2[i]
    x1 = np.expand_dims(x1, axis=0)
    x2 = np.expand_dims(x2, axis=0)
    y_true.append(np.argmax(test_y[i]))
    pred = model.predict([x1, x2])
    y_pred.append(make_binary(pred))
This is the preprocessing method I used for both images:
def create_features(file, image_dir, base_model):
    img_path = os.path.join(image_dir, file)
    img = image.load_img(img_path, target_size=(224, 224))
    img = image.img_to_array(img)
    x = resnet50.preprocess_input(img)
    x = np.array([x])
    feature = base_model.predict(x)
    return feature
And this is the way I am processing the new images:
IMAGE_DIR = 'Data'
img1 = 'test1.jpg'
img2 = 'test2.jpg'
img3 = 'test3.jpg'
img1_feat = create_features(img1, IMAGE_DIR, model)
img2_feat = create_features(img2, IMAGE_DIR, model)
img3_feat = create_features(img3, IMAGE_DIR, model)
Now when I compare a test feature with a new feature, the shape and type are identical:
x1 = test_x1[0]
x1 = np.expand_dims(x1, axis=0)
print(x1.shape)
print(type(x1))
print(img1_feat.shape)
print(type(img1_feat))
(1, 1, 1000)
<class 'numpy.ndarray'>
(1, 1, 1000)
<class 'numpy.ndarray'>
And then I try to make a prediction from it:
pred1 = model.predict([img1_feat, img2_feat])
But that results in an error.

I figured out what was wrong, thanks to @Matias Valdenegro and @Mukul.
I was doing this in an IPython notebook, and after a few runs through it I found out that the model occasionally gets overwritten by an imported ResNet model from another class.
Thanks to everyone for the help. I hadn't thought of using model.summary(), since I didn't realize the model had changed.
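As a sanity check against this kind of bug, here is a minimal sketch (assuming the two-input model from above) that confirms what the model variable actually points to before predicting:

# Confirm `model` is still the two-input model and not an overwritten ResNet
model.summary()
print(len(model.inputs))  # should be 2 for the two-input model above
assert len(model.inputs) == 2, "model was overwritten somewhere; reload it"
pred1 = model.predict([img1_feat, img2_feat])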

Related

Predict Image class after One shot model Training

I am building an image search using the one-shot model because I have very little data per class.
I am following this tutorial.
I have already prepared the data pipeline and trained the model, but I didn't understand the single-image prediction step that we generally do with model.predict.
I tried the following code but I think I am missing something.
img1 = cv2.imread("./images_evaluation/test.jpg", cv2.IMREAD_GRAYSCALE)
img1 = cv2.resize(img1, (105, 105))
img1 = np.expand_dims(img1, axis=2)  # shape: (105, 105, 1)
(test_image_names, train_image_names) = generate_oneshot_validation_trials(dataset, 20)
train_images = get_images(train_image_names, IMAGE_SHAPE)
images = np.tile(img1, (len(train_images), 1, 1, 1))
preds = siamese_model1.predict([images, train_images])
pred_idx = np.argmax(preds, axis=0)[0]
pred_char_name = train_image_names[pred_idx].split('/')[-2]
print(pred_char_name)  # here, I find a different prediction on every try. What's the reason?
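One hedged guess: if generate_oneshot_validation_trials samples a random support set on each call, then train_images differs between runs and the argmax can land on a different character each time. Assuming the generator uses NumPy's global RNG (an assumption, not something shown above), fixing the seed would make the runs repeatable:

# Hypothetical: pin the RNG so the sampled support set is the same on every run
np.random.seed(42)
(test_image_names, train_image_names) = generate_oneshot_validation_trials(dataset, 20)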

How to test one single image in pytorch

I created my model in PyTorch and it works really well, but when I want to test just one image (batch_size=1) it always returns the second class (in this case, a dog).
I tried testing with batch sizes > 1, and in all those cases it works!
The architecture:
model = models.densenet121(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(1024, 500)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(500, 2)),
    ('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
So my tensors are [batch, 3, 224, 224].
I have tried:
resize
reshape
unsqueeze(0)
The response for a single image is always [[0.4741, 0.5259]].
My Test Code
from PIL import Image
imsize = 256
loader = transforms.Compose([transforms.Scale(imsize), transforms.ToTensor()])

def image_loader(image_name):
    """load image, returns cuda tensor"""
    image = Image.open(image_name)
    image = loader(image).float()
    image = image.unsqueeze(0)
    return image.cuda()
image = image_loader('Cat_Dog_data/test/cat/cat.16.jpg')
with torch.no_grad():
    logits = model.forward(image)
ps = torch.exp(logits)
_, predTest = torch.max(ps, 1)
print(ps)  # same value in all cases
imagen_mostrar = images[ii].to('cpu')
helper.imshow(imagen_mostrar, title=clas_perro_gato(predTest), normalize=True)
Second Test Code
andrea_data = datasets.ImageFolder(data_dir + '/andrea', transform=test_transforms)
andrealoader = torch.utils.data.DataLoader(andrea_data, batch_size=1, shuffle=True)
dataiter = iter(andrealoader)
images, labels = dataiter.next()
images, labels = images.to(device), labels.to(device)
ps = torch.exp(model.forward(images))
_, predTest = torch.max(ps,1)
print(ps.float())
If I changed my batch_size to 1, it always returned a tensor saying the image is a dog, e.g. [0.43, 0.57].
Thanks!
I realized that my model wasn't in eval mode.
So I just added model.eval(), and that's all; now it works for any batch size.
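For context, a minimal sketch of where the call lands in the test code above; eval mode disables dropout and makes BatchNorm layers use their running statistics, which is why batch_size=1 behaved differently in training mode:

model.eval()  # disable dropout; BatchNorm uses running stats, not batch stats
with torch.no_grad():
    logits = model(image)  # image: [1, 3, 224, 224]
ps = torch.exp(logits)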
You can use this code to test a single image with your trained model:
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader, Dataset
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np

def pre_image(image_path, model):
    img = Image.open(image_path)
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]
    transform_norm = transforms.Compose([transforms.ToTensor(),
                                         transforms.Resize((224, 224)),
                                         transforms.Normalize(mean, std)])
    # get normalized image
    img_normalized = transform_norm(img).float()
    img_normalized = img_normalized.unsqueeze_(0)
    img_normalized = img_normalized.to(device)
    with torch.no_grad():
        model.eval()
        output = model(img_normalized)
        index = output.data.cpu().numpy().argmax()
        classes = train_ds.classes
        class_name = classes[index]
        return class_name
example:
predict_class = pre_image("C:/Users/Salio/Desktop/example.jpeg",your_model)
print(predict_class)
If your model is "correct", it just predicts a dog; you can get the label with torch.argmax(output, dim=1) no matter the batch size.
Anyway, you shouldn't use LogSoftmax as the activation here. Use torch.nn.BCEWithLogitsLoss as your loss function, remove the activation from your final layer, and output only one neuron (the probability of the image being a dog). It would look like this in your case:
classifier = nn.Sequential(
    OrderedDict(
        [
            ("fc1", nn.Linear(1024, 500)),
            ("relu", nn.ReLU()),
            ("fc2", nn.Linear(500, 1)),
            # See? No activation needed
        ]
    )
)
You can get the correct label from the above network simply by checking output > 0, and you get numerical stability "for free".
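A minimal sketch of training and prediction with that single-logit head (the label tensor and variable names here are assumptions, not from the original post):

criterion = nn.BCEWithLogitsLoss()  # sigmoid + BCE fused, numerically stable

# training step: labels are 1.0 for dog, 0.0 for cat, shape [batch, 1]
output = model(images)                    # raw logits, shape [batch, 1]
loss = criterion(output, labels.float())

# prediction: a positive logit means "dog"; no sigmoid needed
is_dog = output > 0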

How to separate a Tensorflow dataset object into features and labels

My goal is to feed a Keras autoencoder model only the (batches of) features from a tf.data.Dataset object.
I'm loading the dataset, formatting the images, and creating batches like this:
# load dataset
(raw_train, raw_validation, raw_test), metadata = tfds.load(
    'cats_vs_dogs',
    split=[
        tfds.Split.TRAIN.subsplit(tfds.percent[:80]),
        tfds.Split.TRAIN.subsplit(tfds.percent[80:90]),
        tfds.Split.TRAIN.subsplit(tfds.percent[90:])],
    with_info=True,
    as_supervised=True,
)

# normalize and resize images
IMG_SIZE = 160

def format_example(image, label):
    image = tf.cast(image, tf.float32)
    image = image / 255.0
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)

# create batches
SHUFFLE_BUFFER_SIZE = 1000
BATCH_SIZE = 32
train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)
At this point I would like to separate the batches into features and labels, something like this:
train_x_batches, train_y_batches = train_batches
But I get this error:
ValueError                                Traceback (most recent call last)
----> 1 train_x_batches, train_y_batches = train_batches
ValueError: too many values to unpack (expected 2)
I had the same problem and solved it like this:
train_x_batches = np.concatenate([x for x, y in train_batches], axis=0)
train_y_batches = np.concatenate([y for x, y in train_batches], axis=0)
And you can get back to your class labels using:
train_batches.class_names
If you need only the features for your autoencoder, you can slice them via map (with as_supervised=True the dataset yields (image, label) pairs, so the mapped function receives two arguments):
train_x_batches = train_batches.map(lambda image, label: image)
Of course, you can do the same thing for your labels:
train_y_batches = train_batches.map(lambda image, label: label)
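Since an autoencoder is trained to reproduce its input, a natural follow-up (a sketch, not from the original answer) is to map each batch to an (image, image) pair so Keras' fit can consume the dataset directly:

# Autoencoder input and target are the same image
train_ae_batches = train_batches.map(lambda image, label: (image, image))
# autoencoder.fit(train_ae_batches, epochs=10)  # hypothetical autoencoder model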

Compare predicted image class to actual image class with Keras

I am training a Keras model to recognise images of cats, dogs and horses.
So far, I have one-hot encoded my data (since this is a multi-class classification problem), trained my model, and called the predictions.
def read_and_process_images(list_of_images):
    X = []  # images
    y = []  # labels
    for image in list_of_images:
        try:
            X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows, ncolumns),
                                interpolation=cv2.INTER_CUBIC))
            if 'dog' in image:
                y.append(0)
            elif 'cat' in image:
                y.append(1)
            elif 'horse' in image:
                y.append(2)
        except Exception as e:
            print(str(e))
    return X, y
...
X_test, y_test = read_and_process_images(test_imgs)
x = np.array(X_test)
test_datagen = ImageDataGenerator(rescale=1./255)

i = 0
text_labels = []
plt.figure(figsize=(30, 20))
for batch in test_datagen.flow(x, batch_size=1):
    pred = model.predict(batch)
    print(np.argmax(pred))
    if np.argmax(pred) == 0:
        text_labels.append('dog')
    elif np.argmax(pred) == 1:
        text_labels.append('cat')
    else:
        text_labels.append('horse')
    plt.subplot(5 // columns + 1, columns, i + 1)
    plt.title('I think this is a ' + text_labels[i])
    imgplot = plt.imshow(batch[0])
    i += 1
    if i % 10 == 0:
        break
plt.show()
The model seems to be working very well. I usually get between 7 and 10 correct predictions, depending on the batch size. However, I do not understand how model.predict chooses its batches, so I am unable to compare the actual values to the predicted values. When I try the following:
y_pred = model.predict(x, batch_size=1)
matrix = confusion_matrix(y_test, y_pred.argmax(axis=1))
the confusion matrix I get is completely nonsensical (for example, it tells me it only got one cat correct, but with some batches I can clearly see it got many more). Could someone explain how the .predict function chooses its batches, and how I can successfully compare the predicted values to the actual test values? Thank you in advance.
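A hedged observation on the two code paths: model.predict does not shuffle; it walks through x in order, with batch_size only controlling how many rows are processed per step, so y_pred lines up with y_test row by row. ImageDataGenerator.flow, however, shuffles by default, and it also applies the rescale=1./255 step that the plain predict call never sees. A likely cause of the nonsensical matrix is therefore that x went into predict unscaled. A sketch of the comparison with the scaling applied manually:

# Apply the same 1/255 rescaling the generator applied during the visual check
y_pred = model.predict(x / 255.0, batch_size=1)  # row order matches y_test
matrix = confusion_matrix(y_test, y_pred.argmax(axis=1))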

Tensorflow/Keras multi-label classifier

I have just started developing a simple classifier in Tensorflow, starting from this example on the Tensorflow site: https://www.tensorflow.org/tutorials/keras/basic_classification
Now I want my model to take images like these as features:
These images should have, as corresponding labels, three arrays: [1,0], [3,0] and [1,3].
My problem is: how can I feed the model this kind of label (i.e. labels that are arrays rather than single scalars)?
When I try it as in the example below, all I get is an error message, which I won't report here because it stems from my lack of knowledge about what I'm trying to do.
Additional question: what should the last layer look like? How many neurons should it have?
Here is the code:
import tensorflow as tf
from tensorflow import keras
import skimage
from skimage.color import rgb2gray
import csv
import numpy as np

names = ['Cerchio', 'Quadrato', 'Stella']
images = []
labels = [[]]
test_images = []
test_labels = [[]]
final_images = []

for i in range(1, 501):
    images.append(skimage.data.imread("{0}.bmp".format(i)))
for i in range(501, 601):
    test_images.append(skimage.data.imread("{0}.bmp".format(i)))
for i in range(601, 701):
    final_images.append(skimage.data.imread("{0}.bmp".format(i)))

file = open("labels.csv", "rU")
reader = csv.reader(file, delimiter=",")
for row in reader:
    for i in range(0, 499):
        if int(row[i]) < 10:
            labels.append([int(int(row[i]) / 10), 0])
        else:
            labels.append([int(int(row[i]) / 10), int(row[i]) % 10])
    for i in range(500, 600):
        if int(row[i]) < 10:
            test_labels.append([int(int(row[i]) / 10), 0])
        else:
            test_labels.append([int(int(row[i]) / 10), int(row[i]) % 10])
file.close()

images28 = np.array(images)
images28 = rgb2gray(images28)
test_images28 = np.array(test_images)
test_images28 = rgb2gray(test_images28)
final_images28 = np.array(final_images)
final_images28 = rgb2gray(final_images28)
labels = np.array(labels)
test_labels = np.array(test_labels)
print(labels)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 56)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(4, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(images28, labels, epochs=5)

test_loss, test_acc = model.evaluate(test_images28, test_labels)
print('Test accuracy:', test_acc)

a = input()
img = final_images28[int(a)]
print(img.shape)
img = (np.expand_dims(img, 0))
print(img.shape)
predictions_single = model.predict(img)
print(predictions_single)
print(names[np.argmax(predictions_single)])
One way is to just map the array labels onto an index, like [[0,0],[0,0],[0,0]] -> 0, [[1,0],[0,0],[0,0]] -> 1, ... etc. You'll have 3^6 = 729 possible labels. If the shapes in the images are standardized, you can probably get away with the simplest classifier with no hidden layers, so it's going to be dim1 x dim2 x 729 trainable weights. If they are not standardized, you will be better off using convolutional layers.
You could probably also use a fully convolutional model for this problem, one that returns a 3-dimensional tensor as output. In that case you can use multi-dimensional labels, but then you'll have to write a custom loss function for it.
After Googling around and toying with my program, I found the solution: a multi-hot encoded array.
In this array, I have one position each for a circle, a square, a star, and the blank space (hence a 4-position array), and I can feed the model labels that have a '1' in each corresponding position.
E.g. (referring to the example above):
[1, 0, 1, 0]
[1, 0, 0, 1]
[0, 0, 1, 1]
This worked perfectly.
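For completeness, a hedged sketch of how the model from the question would change to consume those multi-hot labels; the standard recipe for multi-label output is one sigmoid per class with binary cross-entropy (layer sizes are taken from the question, the rest is an assumption):

# One independent sigmoid per class (circle, square, star, blank) instead of a softmax
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 56)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(4, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',  # not sparse_categorical_crossentropy
              metrics=['accuracy'])
# labels are multi-hot arrays such as [1, 0, 1, 0]
model.fit(images28, labels, epochs=5)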
