Keras predict diabetic retinopathy - Python

I'm trying to predict diabetic retinopathy using the DenseNet121 model from Keras.
I have 5 folders containing 0: 3647 images, 1: 750 images, 2: 1105 images, 3: 305 images, 4: 193 images.
The training data has 6000 images, the validation data has 1000 images, and the test data has 25 images for a quick check.
I use the Keras ImageDataGenerator to preprocess and augment the images; the image size is (224, 224).
from keras.preprocessing.image import ImageDataGenerator
from skimage import exposure

def HE(img):
    # histogram equalization as an extra preprocessing step
    img_eq = exposure.equalize_hist(img)
    return img_eq

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=90,
    width_shift_range=0,
    height_shift_range=0,
    shear_range=0,
    zoom_range=0,
    horizontal_flip=True,
    fill_mode='nearest',
    preprocessing_function=HE,
)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train = train_datagen.flow_from_directory(
    'train/train_deep/',
    target_size=(224, 224),
    color_mode='grayscale',
    class_mode='categorical',
    batch_size=20,
)
test = test_datagen.flow_from_directory(
    'test_deep/',
    batch_size=1,
    target_size=(224, 224),
    color_mode='grayscale',
)
val = validation_datagen.flow_from_directory(
    'train/validate_deep/',
    target_size=(224, 224),
    color_mode='grayscale',
    batch_size=20,
)
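For reference, the class-to-index mapping that flow_from_directory infers from the folder names can be checked directly (a quick sanity check, not part of my original code):

print(train.class_indices)  # e.g. {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4}
print(train.samples, val.samples, test.samples)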
I compile a DenseNet121 model from Keras:
from keras.applications import DenseNet121
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

model = DenseNet121(include_top=True, weights=None, input_tensor=None,
                    input_shape=(224, 224, 3), pooling=None, classes=5)
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.summary()

filepath = "weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"
checkpointer = ModelCheckpoint(filepath, monitor='val_loss', verbose=1,
                               save_best_only=True, save_weights_only=True)
lr_reduction = ReduceLROnPlateau(monitor='val_loss', patience=5, verbose=2,
                                 factor=0.5)
callbacks_list = [checkpointer, lr_reduction]

history = model.fit_generator(
    train,
    epochs=Epoch,  # Epoch = number of training epochs, defined elsewhere
    validation_data=val,
    class_weight={0: 1, 1: 10.57, 2: 4.88, 3: 29, 4: 35},
    use_multiprocessing=False,
    workers=16,
    callbacks=callbacks_list,
)
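For reference, class weights in this style can also be computed from the generator's labels with scikit-learn's balanced heuristic (a sketch; the values will not exactly match the hand-written dictionary above):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# 'balanced' weights each class by n_samples / (n_classes * count_of_that_class)
labels = train.classes
weights = compute_class_weight(class_weight='balanced',
                               classes=np.unique(labels),
                               y=labels)
class_weight = dict(enumerate(weights))
print(class_weight)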
But when I try to predict:
# predict
pred = model.predict_generator(test, steps=25)
print(pred)
The model predicts class 3 for every image.
[image: my prediction output]
Problems I am facing:
1. I tried changing the class weights because the data is imbalanced, but it still doesn't work.
2. Each epoch takes an estimated 6-7 minutes, which is too long if I want to train for many epochs, e.g. 50. What should I do?
Edit
1. I printed the prediction array for my 25 test images, and it shows:
[[0.2718658 0.21595034 0.29440382 0.12089088 0.0968892 ]
[0.2732306 0.22084573 0.29103383 0.11724534 0.0976444 ]
[0.27060518 0.22559224 0.2952135 0.11220136 0.09638774]
[0.27534768 0.21236925 0.28757185 0.12544192 0.09926935]
[0.27870545 0.22124214 0.27978882 0.11854914 0.1017144 ]
[0.2747815 0.22287942 0.28961015 0.11473729 0.09799159]
[0.27190813 0.22454649 0.29327467 0.11331796 0.09695279]
[0.27190694 0.22116153 0.27061856 0.12831333 0.10799967]
[0.27871644 0.21939436 0.28575435 0.11689039 0.09924441]
[0.27156618 0.22850358 0.27458736 0.11895953 0.10638336]
[0.27199408 0.22443996 0.29326025 0.11337796 0.09692782]
[0.27737287 0.22283535 0.28601763 0.11459836 0.09917582]
[0.2719294 0.22462222 0.29477262 0.11228184 0.09639395]
[0.27496076 0.22619417 0.24634513 0.12380602 0.12869397]
[0.27209386 0.23049556 0.27982628 0.11399914 0.10358524]
[0.2763851 0.22362126 0.27667257 0.11974224 0.10357884]
[0.28445077 0.22687359 0.22116113 0.12310001 0.14441448]
[0.27552167 0.22341767 0.28794768 0.11433118 0.09878179]
[0.27714184 0.22157396 0.26033664 0.12819317 0.11275442]
[0.27115697 0.22615613 0.29698634 0.10981857 0.09588206]
[0.27108756 0.22484282 0.29557163 0.11230227 0.09619577]
[0.2713721 0.22606659 0.29634616 0.11017173 0.09604342]
[0.27368984 0.22699612 0.28083235 0.11586079 0.10262085]
[0.2698808 0.22924589 0.29770645 0.10761821 0.0955487 ]
[0.27016872 0.23090932 0.2694938 0.11959692 0.1098313 ]]
I see that some images belong to class 0, yet the prediction is the same for all of them. Why does it show that?
2. I changed a few lines in the DenseNet121 model code: I removed the external top layer and changed the prediction code to make the output easier to read.
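For example, the probabilities above can be collapsed to predicted class indices like this (a sketch, not necessarily the exact change I made):

import numpy as np

pred = model.predict_generator(test, steps=25)
pred_classes = np.argmax(pred, axis=1)  # index of the most probable class for each image
print(pred_classes)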

Related

How to load validation data with flow_from_directory function in tensorflow

[Resolved]
I have an issue when using the flow_from_directory function from the tensorflow.keras.preprocessing.image module.
I can load all my data for training my model, but I can't load my data to validate the training...
My code (edit) [WORKING!]:
# Import
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_data_dir = '../data/train'
validation_data_dir = '../data/validation'

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    #validation_split=0.20
)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)

def create_dataset(train_data_dir, validation_data_dir, img_height, img_width, batch_size):
    train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        shuffle=True,
        class_mode='categorical',
        subset='training',
    )
    validation_generator = train_datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        shuffle=True,
        class_mode='categorical',
        subset='validation',
    )
    return train_generator, validation_generator
My dataset is built in a function that I call for my CNN model, and when I run it, it always returns this:
Found 0 images belonging to 6 classes.
Epoch 1/5
Found 5732 images belonging to 6 classes.
Found 0 images belonging to 6 classes.
Epoch 1/5
EDIT:
This is the part in my model.py file where I call the function create_dataset:
image_height = 100
image_width = 100
batch_size = 32

ds_train, ds_validation = create_dataset("../data/train", "../data/validation", image_height, image_width, batch_size)

# ... CNN model definition ...

model.fit(ds_train, epochs=5, verbose=2)
model.evaluate(ds_validation, verbose=2)
model.save('../complete_saved_model/')
Can someone help me resolve this?
In your train_generator, remove subset='training'. Likewise, in your validation_generator, remove subset='validation'.
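A sketch of what the corrected create_dataset could look like, reusing the two ImageDataGenerator objects from the question (subset only makes sense together with validation_split on a single directory; with validation_split unset it selects an empty slice, which is why 0 images are found):

def create_dataset(train_data_dir, validation_data_dir, img_height, img_width, batch_size):
    train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        shuffle=True,
        class_mode='categorical')
    validation_generator = test_datagen.flow_from_directory(  # rescale-only generator for validation
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        shuffle=True,
        class_mode='categorical')
    return train_generator, validation_generator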

model.predict() function always returns 1 for binary image classification model

I was working on a binary image classification deep learning model using transfer learning in Google Colab.
!wget https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 -O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5

local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(input_shape=(300, 300, 3),
                                include_top=False,
                                weights=None)
pre_trained_model.load_weights(local_weights_file)

for layer in pre_trained_model.layers:
    layer.trainable = False

last_layer = pre_trained_model.get_layer('mixed7')
last_output = last_layer.output

x = layers.Flatten()(last_output)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(pre_trained_model.input, x)

from tensorflow.keras.optimizers import RMSprop

model.compile(optimizer=RMSprop(lr=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1/255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest")
validation_datagen = ImageDataGenerator(rescale=1/255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(300, 300),
    batch_size=100,
    class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(
    validation_dir,
    target_size=(300, 300),
    batch_size=100,
    class_mode='binary')

history = model.fit(
    train_generator,
    steps_per_epoch=20,
    epochs=30,
    verbose=1,
    validation_data=validation_generator,
    validation_steps=10,
    callbacks=[callbacks])

import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image

uploaded = files.upload()
for fn in uploaded.keys():
    path = '/content/' + fn
    img = image.load_img(path, target_size=(300, 300))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    classes = model.predict(images, batch_size=10)
    print(classes)
Even after training the model and obtaining quite good accuracy on the training and validation data, the model always predicts 1 for any new image. I have tried changing the batch size, epochs, learning rate, etc., but no luck.
Can anyone explain what the problem is here?

Transfer learning on keras model always gives same predictions

I'm trying to train an image classifier using the Keras applications module. When I run predictions on the validation set, all images are predicted as the same class. It is not always the same class; it varies during training. I'm using MobileNetV2 with weights from ImageNet, but I also tried other models with the same result.
I've tried using model from TensorFlow hub like described in this tutorial: https://www.tensorflow.org/beta/tutorials/images/hub_with_keras and it worked fine, so it is not a data set issue.
My code snippet:
image_size = 224
batch_size = 32

train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input)
validation_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input)

train_generator = train_datagen.flow_from_directory(training_data_dir,
                                                    target_size=(image_size, image_size),
                                                    batch_size=batch_size)
validation_generator = train_datagen.flow_from_directory(validation_data_dir,
                                                         target_size=(image_size, image_size),
                                                         batch_size=batch_size)

IMG_SHAPE = (image_size, image_size, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights="imagenet")
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(train_generator.num_classes, activation='softmax')
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

batch_stats = CollectBatchStats()
epoch_stats = CollectEpochStats(model, validation_generator)
checkpoint = tf.keras.callbacks.ModelCheckpoint(...)

epochs = 10
steps_per_epoch = train_generator.n // train_generator.batch_size
validation_steps = validation_generator.n // validation_generator.batch_size
history = model.fit_generator(train_generator,
                              epochs=epochs,
                              steps_per_epoch=steps_per_epoch,
                              callbacks=[batch_stats, epoch_stats, checkpoint],
                              workers=4,
                              validation_data=validation_generator,
                              validation_steps=validation_steps)
Issue resolved: in my code I had the following lines after model compilation:

sess = keras_backend.get_session()
init = tf.compat.v1.global_variables_initializer()
sess.run(init)

After removing them everything works fine (presumably because running the global variable initializer after loading re-initializes the pretrained ImageNet weights, so the frozen base model produces essentially random features).

Why do my models keep getting exactly 0.5 AUC?

I am currently doing a project in which I need to predict eye disease in a group of images. I am using the Keras built-in applications. I am getting good results on VGG16 and VGG19, but on the Xception architecture I keep getting AUC of exactly 0.5 every epoch.
I have tried different optimizers and learning rates, but nothing works. I solved the same problem with VGG19 by switching from RMSProp optimizer to Adam optimizer, but I can't get it to work for Xception.
def buildModel():
    from keras.models import Model
    from keras.layers import Dense, Flatten
    from keras.optimizers import adam

    input_model = applications.xception.Xception(
        include_top=False,
        weights='imagenet',
        input_tensor=None,
        input_shape=input_sizes["xception"],
        pooling=None,
        classes=2)

    base_model = input_model
    x = base_model.output
    x = Flatten()(x)
    predictions = Dense(2, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    for layer in base_model.layers:
        layer.trainable = False
    model.compile(optimizer=adam(lr=0.01), loss='binary_crossentropy', metrics=['accuracy'])
    return model
class Histories(keras.callbacks.Callback):
    def __init__(self, val_data):
        super(Histories, self).__init__()
        self.x_batch = []
        self.y_batch = []
        for i in range(len(val_data)):
            x, y = val_data.__getitem__(i)
            self.x_batch.extend(x)
            self.y_batch.extend(np.ndarray.astype(y, int))
        self.aucs = []
        self.specificity = []
        self.sensitivity = []
        self.losses = []
        return

    def on_train_begin(self, logs={}):
        initFile("results/xception_results_adam_3.txt")
        return

    def on_train_end(self, logs={}):
        return

    def on_epoch_begin(self, epoch, logs={}):
        return

    def on_epoch_end(self, epoch, logs={}):
        self.losses.append(logs.get('loss'))
        y_pred = self.model.predict(np.asarray(self.x_batch))
        con_mat = confusion_matrix(np.asarray(self.y_batch).argmax(axis=-1), y_pred.argmax(axis=-1))
        tn, fp, fn, tp = con_mat.ravel()
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        auc_score = roc_auc_score(np.asarray(self.y_batch).argmax(axis=-1), y_pred.argmax(axis=-1))
        print("Specificity: %f Sensitivity: %f AUC: %f" % (spec, sens, auc_score))
        print(con_mat)
        self.sensitivity.append(sens)
        self.specificity.append(spec)
        self.aucs.append(auc_score)
        writeToFile("results/xception_results_adam_3.txt", epoch, auc_score, spec, sens, self.losses[epoch])
        return
# What follows is data from the Jupyter Notebook that I actually use to evaluate
#%% Initialize data
trainDirectory = 'RetinaMasks/train'
valDirectory = 'RetinaMasks/val'
testDirectory = 'RetinaMasks/test'

train_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    trainDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    valDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')
test_generator = test_datagen.flow_from_directory(
    testDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

#%% Create model
model = buildModel("xception")

#%% Initialize metrics
from keras.callbacks import EarlyStopping
from MetricsCallback import Histories
import keras

metrics = Histories(validation_generator)
es = EarlyStopping(monitor='val_loss',
                   min_delta=0,
                   patience=20,
                   verbose=0,
                   mode='auto',
                   baseline=None,
                   restore_best_weights=False)
mcp = keras.callbacks.ModelCheckpoint("saved_models/xception.adam.lr0.1_{epoch:02d}.hdf5",
                                      monitor='val_loss',
                                      verbose=0,
                                      save_best_only=False,
                                      save_weights_only=False,
                                      mode='auto',
                                      period=1)

#%% Train model
from StaticDataAugmenter import superDirectorySize

history = model.fit_generator(
    train_generator,
    steps_per_epoch=superDirectorySize(trainDirectory) // 16,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=superDirectorySize(valDirectory) // 16,
    callbacks=[metrics, es, mcp],
    workers=8,
    shuffle=False
)
I honestly have no idea what causes this behavior, or how to prevent it. Thank you in advance, and I apologize for the long code snippet :)
Your learning rate is too high. Try lowering it. I used to run into this when doing transfer learning; I was fine-tuning at very high learning rates.
An AUC of 0.5 over multiple epochs in a binary classification problem means that your (convolutional) neural network is not able to distinguish between the classes at all, which in turn means it is not able to learn anything.
Try learning rates of 0.0001, 0.00001, or 0.000001.
At the same time, you should try to unfreeze/make some layers trainable, because your entire feature extractor is frozen; that could be another reason why the network is incapable of learning anything.
I am quite confident that your problem will be solved if you lower your learning rate :).
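For example, a minimal sketch of unfreezing the last few layers of the frozen Xception base and recompiling with a much smaller learning rate (the number of layers to unfreeze is illustrative, not tuned):

from keras.optimizers import Adam

# make the last few layers trainable again
for layer in model.layers[-20:]:
    layer.trainable = True

# changes to `trainable` only take effect after recompiling
model.compile(optimizer=Adam(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['accuracy'])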
An AUC of 0.5 implies that your network is randomly guessing the output, which means it didn't learn anything. This was already discussed, for example, here.
As Timbus Calin suggested, you could do a "line search" of the learning rate, starting with 0.000001 and then increasing it by powers of 10.
I would suggest you start directly with a random search, where you not only optimize the learning rate but also other hyperparameters such as the batch size. Read more about random search in this paper.
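A minimal sketch of such a random search, assuming a hypothetical train_and_evaluate(lr, batch_size) helper that builds the model, trains it for a few epochs, and returns the validation AUC:

import random

best = None
for trial in range(20):
    lr = 10 ** random.uniform(-6, -3)           # sample the learning rate on a log scale
    batch_size = random.choice([8, 16, 32, 64])
    score = train_and_evaluate(lr=lr, batch_size=batch_size)  # hypothetical helper
    if best is None or score > best[0]:
        best = (score, lr, batch_size)
print("best (AUC, lr, batch_size):", best)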
You are not computing the AUC correctly; you currently have this:
auc_score = roc_auc_score(np.asarray(self.y_batch).argmax(axis=-1), y_pred.argmax(axis=-1))
AUC is computed from (probability) scores produced by the model. The argmax of the model output does not provide scores, but class labels. The correct function call is:
auc_score = roc_auc_score(np.asarray(self.y_batch).argmax(axis=-1), y_pred[:, 1])
Note that the score needed to compute ROC is the probability of the positive class, which is the second element of the softmax output. This is why only the second column of the predictions is used to make the AUC.
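A small self-contained example of the difference, with made-up labels and softmax outputs:

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([[0.45, 0.55],   # a negative the model gets wrong
                   [0.90, 0.10],
                   [0.40, 0.60],
                   [0.10, 0.90]])

print(roc_auc_score(y_true, y_pred.argmax(axis=-1)))  # 0.75 -- hard labels lose the ranking
print(roc_auc_score(y_true, y_pred[:, 1]))            # 1.0  -- positive-class probabilities

Note also that if the model predicts the same class for every sample, the argmax is constant and the AUC computed from it comes out as exactly 0.5, which matches what you are seeing.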
What about this?
def buildModel():
    from keras.models import Model
    from keras.layers import Dense, Flatten
    from keras.optimizers import adam

    input_model = applications.xception.Xception(
        include_top=False,
        weights='imagenet',
        input_tensor=None,
        input_shape=input_sizes["xception"],
        pooling='avg',  # 1
        classes=2)

    base_model = input_model
    x = base_model.output
    # x = Flatten()(x)  # 2
    predictions = Dense(2, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    for layer in base_model.layers:
        layer.trainable = False
    model.compile(optimizer=adam(lr=0.01),
                  loss='categorical_crossentropy',  # 3
                  metrics=['accuracy'])
    return model
class Histories(keras.callbacks.Callback):
    def __init__(self, val_data):
        super(Histories, self).__init__()
        self.x_batch = []
        self.y_batch = []
        for i in range(len(val_data)):
            x, y = val_data.__getitem__(i)
            self.x_batch.extend(x)
            self.y_batch.extend(np.ndarray.astype(y, int))
        self.aucs = []
        self.specificity = []
        self.sensitivity = []
        self.losses = []
        return

    def on_train_begin(self, logs={}):
        initFile("results/xception_results_adam_3.txt")
        return

    def on_train_end(self, logs={}):
        return

    def on_epoch_begin(self, epoch, logs={}):
        return

    def on_epoch_end(self, epoch, logs={}):
        self.losses.append(logs.get('loss'))
        y_pred = self.model.predict(np.asarray(self.x_batch))
        con_mat = confusion_matrix(np.asarray(self.y_batch).argmax(axis=-1), y_pred.argmax(axis=-1))
        tn, fp, fn, tp = con_mat.ravel()
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        auc_score = roc_auc_score(np.asarray(self.y_batch).argmax(axis=-1), y_pred.argmax(axis=-1))
        print("Specificity: %f Sensitivity: %f AUC: %f" % (spec, sens, auc_score))
        print(con_mat)
        self.sensitivity.append(sens)
        self.specificity.append(spec)
        self.aucs.append(auc_score)
        writeToFile("results/xception_results_adam_3.txt", epoch, auc_score, spec, sens, self.losses[epoch])
        return
# What follows is data from the Jupyter Notebook that I actually use to evaluate
#%% Initialize data
trainDirectory = 'RetinaMasks/train'
valDirectory = 'RetinaMasks/val'
testDirectory = 'RetinaMasks/test'

train_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    trainDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    valDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')
test_generator = test_datagen.flow_from_directory(
    testDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

#%% Create model
model = buildModel("xception")

#%% Initialize metrics
from keras.callbacks import EarlyStopping
from MetricsCallback import Histories
import keras

metrics = Histories(validation_generator)
es = EarlyStopping(monitor='val_loss',
                   min_delta=0,
                   patience=20,
                   verbose=0,
                   mode='auto',
                   baseline=None,
                   restore_best_weights=False)
mcp = keras.callbacks.ModelCheckpoint("saved_models/xception.adam.lr0.1_{epoch:02d}.hdf5",
                                      monitor='val_loss',
                                      verbose=0,
                                      save_best_only=False,
                                      save_weights_only=False,
                                      mode='auto',
                                      period=1)

#%% Load saved model
from keras.models import load_model
# model = load_model("saved_models/vgg16.10.hdf5")  # 4

#%% Train model
from StaticDataAugmenter import superDirectorySize

history = model.fit_generator(
    train_generator,
    steps_per_epoch=superDirectorySize(trainDirectory) // 16,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=superDirectorySize(valDirectory) // 16,
    callbacks=[metrics, es, mcp],
    workers=8,
    shuffle=False
)
For 1 and 2: I think it doesn't make sense to put a fully connected layer right after the ReLU activations without a pooling layer in between, which is why I used pooling='avg' and removed the Flatten; I've never tried it that way, so it might not help.
For 3: why are you using binary cross-entropy when your generators use class_mode='categorical'?
For 4: as I commented above, this means you would be loading your VGG model and training it, instead of using the Xception model from buildModel().

Keras model.predict always 0

I am using Keras applications for transfer learning with ResNet50 and InceptionV3, but when predicting I always get [[ 0.]].
The code below is for a binary classification problem. I have also tried VGG19 and VGG16, and they work fine; it's just ResNet and Inception that don't. The dataset is a 50/50 split, and I am only changing the model = applications.resnet50.ResNet50 line of code for each model.
Below is the code:
from keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', patience=2)

img_width, img_height = 256, 256
train_data_dir = xxx
validation_data_dir = xxx
nb_train_samples = 14000
nb_validation_samples = 6000
batch_size = 16
epochs = 50

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = applications.resnet50.ResNet50(weights="imagenet", include_top=False, input_shape=(img_width, img_height, 3))

# Freeze the layers which you don't want to train. Here I am freezing the first 5 layers.
for layer in model.layers[:5]:
    layer.trainable = False

# Adding custom layers
x = model.output
x = Flatten()(x)
x = Dense(1024, activation="relu")(x)
x = Dropout(0.5)(x)
#x = Dense(1024, activation="relu")(x)
predictions = Dense(1, activation="sigmoid")(x)

# creating the final model
model_final = Model(input=model.input, output=predictions)

# compile the model
model_final.compile(loss="binary_crossentropy", optimizer=optimizers.SGD(lr=0.0001, momentum=0.9), metrics=["accuracy"])

# Initiate the train and test generators with data augmentation
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

# Save the model according to the conditions
#checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)
#early = EarlyStopping(monitor='val_acc', min_delta=0, patience=10, verbose=1, mode='auto')

model_final.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    callbacks=[early_stopping])

from keras.models import load_model
import numpy as np
from keras.preprocessing.image import img_to_array, load_img

#test_model = load_model('vgg16_1.h5')
img = load_img('testn7.jpg', False, target_size=(img_width, img_height))
x = img_to_array(img)
x = np.expand_dims(x, axis=0)
#preds = model_final.predict_classes(x)
prob = model_final.predict(x, verbose=0)
#print(preds)
print(prob)
Note that model_final.evaluate_generator(validation_generator, nb_validation_samples) gives an expected accuracy of around 80%; it's just predict that always returns 0.
I just find it strange that VGG19 and VGG16 work fine but ResNet50 and Inception don't. Do these models require something else to work?
Any insight would be great.
Thanks in advance.
I was running into a similar problem. You are scaling all the RGB values from 0-255 to 0-1 during training, so the same should be done at prediction time.
Try:

x = img_to_array(img)
x = x / 255
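Putting that together, the prediction part of the question could look like this (a sketch that simply mirrors the training-time rescaling):

import numpy as np
from keras.preprocessing.image import img_to_array, load_img

img = load_img('testn7.jpg', target_size=(img_width, img_height))
x = img_to_array(img)
x = x / 255.0                    # same 1./255 rescaling the training generators apply
x = np.expand_dims(x, axis=0)    # add the batch dimension
prob = model_final.predict(x, verbose=0)
print(prob)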
