I have the following code that produces my horrible accuracy dilemma. Has anyone else encountered this issue with a multi-class classification task (49 different classes of images)?
I am running ResNet50 on top of my CNN model with softmax as the last activation function; my loss is categorical_crossentropy and my optimizer is Adam.
What might I be doing wrong?
## Build CNN architecture
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense, Dropout

model1 = Sequential()
model1.add(Conv2D(32, (3,3), strides=1, input_shape=(720, 720, 3)))
model1.add(Activation('relu'))
model1.add(Conv2D(32, (3,3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2,2)))
model1.add(Conv2D(64, (3,3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(Conv2D(64, (3,3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2,2)))
model1.add(Flatten())
model1.add(Dense(200))
model1.add(Activation('relu'))
model1.add(Dense(200))
model1.add(Dropout(0.24))
model1.add(Activation('relu'))
model1.add(Dense(49, activation='softmax'))
model1.summary()
# Image data generator for on the fly image augmentation
directory = '/home/carlini-TF2/data/train/'
batch_size = 64
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=90.,
    shear_range=0.2,
    zoom_range=[0.8, 1.2],
    horizontal_flip=True,
    validation_split=0.2,
    preprocessing_function=tf.keras.applications.resnet50.preprocess_input)
train_generator = train_datagen.flow_from_directory(
    directory=directory,
    subset='training',
    target_size=(720, 720),
    shuffle=True,
    seed=42,
    color_mode='rgb',
    class_mode='categorical',
    batch_size=batch_size)
valid_directory = '/home/carlini-TF2/data/test/'
valid_generator = train_datagen.flow_from_directory(
    directory=valid_directory,
    target_size=(720, 720),
    color_mode="rgb",
    batch_size=batch_size,
    class_mode="categorical",
    subset='validation',
    shuffle=True,
    seed=42)
## Compile and train Neural Network
METRICS = [
    tf.keras.metrics.Accuracy(name='accuracy'),
    tf.keras.metrics.Precision(name='precision'),
    tf.keras.metrics.Recall(name='recall')]
# optimal optimizer FN | loss FN to work with accuracy metric
model1.compile(loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
               optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
               metrics=METRICS)
# stop training when loss gets worse after consecutive epochs
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
# fit model with augmented training set and validation set | shuffle batch
history = model1.fit(train_generator,
                     validation_data=valid_generator,
                     steps_per_epoch=train_generator.n // batch_size,
                     validation_steps=valid_generator.n // batch_size,
                     shuffle=True, callbacks=[callback],
                     epochs=50)
The issue is that ResNet50 was only being used for data preprocessing/augmentation and not in the CNN architecture itself. In order to reach a somewhat robust model, the following code is needed.
We can throw out the previous architecture and use a very simple model together with ResNet50, since this gives conclusive results.
We must use the Functional API, since ResNet50 was built on it.
import numpy as np
import tensorflow as tf

# Initialise the output-layer bias from the class imbalance in the training data
data_bias = np.log(1802. / 4657)
initializer = tf.keras.initializers.Constant(data_bias)

# Pre-trained ResNet50 backbone (ImageNet weights), frozen
resnet50_imagenet_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, input_shape=(720, 720, 3))
resnet50_imagenet_model.trainable = False

# Flatten output of ResNet50
flattened = tf.keras.layers.Flatten()(resnet50_imagenet_model.output)
# Fully connected output layer with 49 different labels
fc2 = tf.keras.layers.Dense(49, activation='softmax', bias_initializer=initializer, name="AddedDense2")(flattened)
model1 = tf.keras.models.Model(inputs=resnet50_imagenet_model.input, outputs=fc2)
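For completeness, here is a minimal sketch of compiling and fitting this rebuilt model. It assumes the same train_generator, valid_generator, batch_size and EarlyStopping callback defined earlier; it also uses CategoricalAccuracy rather than Accuracy, since the latter compares raw label values rather than one-hot probabilities.
# Minimal sketch (assumes the generators, batch_size and callback defined above)
model1.compile(loss='categorical_crossentropy',
               optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
               metrics=[tf.keras.metrics.CategoricalAccuracy(name='accuracy')])
history = model1.fit(train_generator,
                     validation_data=valid_generator,
                     steps_per_epoch=train_generator.n // batch_size,
                     validation_steps=valid_generator.n // batch_size,
                     callbacks=[callback],
                     epochs=50)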
I am trying to train a model using Keras for multi-class classification. This is an image classification problem with five classes: bedroom, bathroom, living room, dining room, and kitchen. The problem is that the model doesn't seem to learn; it is always stuck at 20% accuracy and the loss never changes from epoch 1. I'm using the convolutional base of the Xception model with my own classifier on top. The train, test, and validation datasets are set up using the tf.data API.
Can someone please point out what I am doing wrong?
This is the dataset generation
train_dir = "House_Dataset/Train"
valid_dir = "House_Dataset/Valid"
test_dir = "House_Dataset/Test"
train_ds = trainAug.flow_from_directory(
    train_dir,
    target_size=(224,224),
    shuffle=False,
    class_mode="sparse"
)
valid_ds = image_dataset_from_directory(
    valid_dir,
    image_size=(224,224),
    shuffle=False,
)
test_ds = image_dataset_from_directory(
    test_dir,
    image_size=(224,224),
    shuffle=False,
)
This is the importing of the Xception convolutional base.
conv_base = keras.applications.Xception(include_top=False, weights="imagenet", input_shape=(224,224,3))
conv_base.trainable = False
This is the model building function.
def pre_trained():
    inputs = keras.Input(shape=(224,224,3))
    #x = data_augmentation(inputs)
    x = keras.applications.xception.preprocess_input(inputs)
    x = conv_base(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(5, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
Training function call:
pre_trained_model = pre_trained()
history = pre_trained_model.fit(train_ds, epochs=25)
The picture of the epochs (not included here) showed the accuracy stuck at 20% and the loss unchanged from the first epoch.
While the exact cause remains unclear to me, I have found where the problem occurred and how to fix it.
I added some parameters to the dataset generation calls.
train_dir = "House_Dataset/Train"
valid_dir = "House_Dataset/Valid"
test_dir = "House_Dataset/Test"
train_ds = image_dataset_from_directory(
    train_dir,
    image_size=(224,224),
    shuffle=True,
    seed=1,
    labels="inferred",
    label_mode="categorical"
)
valid_ds = image_dataset_from_directory(
    valid_dir,
    image_size=(224,224),
    shuffle=True,
    seed=1,
    labels="inferred",
    label_mode="categorical"
)
test_ds = image_dataset_from_directory(
    test_dir,
    image_size=(224,224),
    shuffle=True,
    seed=1,
    labels="inferred",
    label_mode="categorical"
)
I added the option to shuffle with a seed, and changed the label mode to categorical, which produces one-hot encoded labels. Likewise, I changed the loss from sparse_categorical_crossentropy to categorical_crossentropy. These changes have allowed the model to train, and there have been significant improvements in both training and validation loss as well as accuracy.
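For reference, a minimal sketch of the matching compile call after these changes; the loss must now agree with the one-hot labels produced by label_mode="categorical".
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",  # one-hot labels from label_mode="categorical"
              metrics=["accuracy"])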
Try my CNN network and see if you get 87% accuracy. The CNN extracts features at each layer with its filters; the extracted features then feed into the softmax function over the categories.
model=Sequential()
model.add(Conv2D(32, (3,3),activation='relu',input_shape=(IMG_SIZE,IMG_SIZE,3)))
#model.add(Dropout(0.25))
model.add(MaxPooling2D(2))
#model.add(BatchNormalization())
model.add(Conv2D(64, (3,3), activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Conv2D(128, (3,3), activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Conv2D(128, (3,3), activation="relu"))
model.add(MaxPooling2D(2,2))
#model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(512,activation='relu'))
model.add(Dense(5, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics = ['accuracy'])
model.summary()
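A minimal, untested sketch of how this network might be fed, assuming a directory layout like the House_Dataset one above; IMG_SIZE and the paths are placeholders, not part of the original answer.
# Hypothetical training sketch; paths and IMG_SIZE are placeholders
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = 224
datagen = ImageDataGenerator(rescale=1./255)
train_gen = datagen.flow_from_directory("House_Dataset/Train", target_size=(IMG_SIZE, IMG_SIZE),
                                        class_mode="categorical")
valid_gen = datagen.flow_from_directory("House_Dataset/Valid", target_size=(IMG_SIZE, IMG_SIZE),
                                        class_mode="categorical", shuffle=False)
history = model.fit(train_gen, validation_data=valid_gen, epochs=25)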
I want to create a machine learning model with TensorFlow that detects flowers. I went out into nature and took pictures of 4 different species (~600 images per class; one class has 700).
I load these images with the TensorFlow/Keras ImageDataGenerator:
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.15,
                                   brightness_range=[0.7, 1.4],
                                   fill_mode='nearest',
                                   vertical_flip=True,
                                   horizontal_flip=True,
                                   rotation_range=15,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   validation_split=0.2)
train_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0], imageShape[1]),
    batch_size=batchSize,
    class_mode='categorical',
    subset='training',
    seed=1,
    shuffle=False,
    #save_to_dir=r'G:\test'
)
validation_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0], imageShape[1]),
    batch_size=batchSize,
    shuffle=False,
    seed=1,
    class_mode='categorical',
    subset='validation')
Then I am creating a simple model looking like this:
model = tf.keras.Sequential([
    keras.layers.Conv2D(128, (3,3), activation='relu', input_shape=(imageShape[0], imageShape[1], 3)),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Dropout(0.5),
    keras.layers.Conv2D(256, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Conv2D(512, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Flatten(),
    keras.layers.Dense(280, activation='relu'),
    keras.layers.Dense(4, activation='softmax')
])
opt = tf.keras.optimizers.SGD(learning_rate=0.001, decay=1e-5)
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
And I want to start the training process (on the CPU):
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batchSize,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batchSize,
    epochs=200,
    callbacks=[checkpoint, early, tensorboard],
    workers=-1)
The expectation is that my validation accuracy improves, but it starts at 0.3375 and stays at that level for the whole training process. The validation loss (1.3737) decreases by only about 0.001. Training accuracy starts at 0.15 but does increase.
Why is my validation accuracy stuck?
Am I using the right loss? Did I build my model wrong? Is my ImageDataGenerator one-hot encoding the labels?
Thanks
I solved the problem by using RMSprop() without any parameters.
So I changed from:
opt = tf.keras.optimizers.SGD(learning_rate=0.001,decay=1e-5)
model.compile(loss='categorical_crossentropy',optimizer= opt, metrics=['accuracy'])
to:
opt = tf.keras.optimizers.RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
This is a similar example, except that the code below is for binary classification rather than your 4 categorical classes. You may want to change the loss to categorical cross-entropy, change class_mode from binary to categorical in the train and test generators, and change the final Dense layer's activation to softmax (see the sketch after the code below). I am still able to use model.fit_generator().
image_dataGen = ImageDataGenerator(rotation_range=20,
                                   width_shift_range=0.2, height_shift_range=0.2, shear_range=0.1,
                                   zoom_range=0.1, fill_mode='nearest', horizontal_flip=True,
                                   vertical_flip=True, rescale=1/255)
train_images = image_dataGen.flow_from_directory(train_path, target_size=image_shape[:2],
                                                 color_mode='rgb', class_mode='binary')
test_images = image_dataGen.flow_from_directory(test_path, target_size=image_shape[:2],
                                                color_mode='rgb', class_mode='binary',
                                                shuffle=False)
model = Sequential()
model.add(Conv2D(filters = 32, kernel_size = (3,3),input_shape = image_shape,activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))
model.add(Conv2D(filters = 48, kernel_size = (3,3),input_shape = image_shape,activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))
model.add(Flatten())
model.add(Dense(units = 128,activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(units = 1, activation = 'sigmoid'))
model.compile(loss = 'binary_crossentropy',metrics = ['accuracy'], optimizer = 'adam')
results = model.fit_generator(train_images, epochs=10, callbacks=[early_stop],
                              validation_data=test_images)
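A rough, untested sketch of the categorical changes described above (4 classes; everything else stays as in the code just shown):
# Categorical variant: class_mode, final layer and loss change
train_images = image_dataGen.flow_from_directory(train_path, target_size=image_shape[:2],
                                                 color_mode='rgb', class_mode='categorical')
test_images = image_dataGen.flow_from_directory(test_path, target_size=image_shape[:2],
                                                color_mode='rgb', class_mode='categorical',
                                                shuffle=False)
# Use this as the last layer instead of Dense(units=1, activation='sigmoid')
model.add(Dense(units=4, activation='softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')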
Maybe your learning rate is too high.
Use a learning rate of 0.000001, and if that does not work, then try another optimizer like Adam.
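For example, a quick sketch of both options (values are only illustrative, not tuned):
# Option 1: much lower learning rate with SGD
opt = tf.keras.optimizers.SGD(learning_rate=0.000001)
# Option 2: if that does not help, try Adam instead
# opt = tf.keras.optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])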
Use model.fit_generator() instead of model.fit(). The points below could also be helpful.
In order to use .flow_from_directory, you must organize the images into sub-directories. This is an absolute requirement; otherwise the method won't work. Each directory should only contain images of one class, so one folder per class of images. Also, could you check that the paths for the training data and test data are correct? They cannot point to the same location. I have used the ImageDataGenerator class for classification problems. You can also try changing the optimizer to 'Adam'.
Structure Needed:
Image Data Folder/
    Class 1/
        0.jpg
        1.jpg
        ...
    Class 2/
        0.jpg
        1.jpg
        ...
    ...
    Class n/
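A minimal sketch of what this could look like, with the training and test generators pointing to two different locations; the paths, image size, batch size and the model itself are placeholders/assumptions, not part of the original answer.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1./255)
# Each of these folders contains one sub-folder per class
train_gen = datagen.flow_from_directory('data/train', target_size=(224, 224),
                                        class_mode='categorical', batch_size=32)
test_gen = datagen.flow_from_directory('data/test', target_size=(224, 224),
                                       class_mode='categorical', batch_size=32, shuffle=False)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit_generator(train_gen, validation_data=test_gen, epochs=10)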
I am using the pre-trained VGG16 model available with Keras and applying it to the SVHN dataset, which has 10 classes (the digits 0-9). The network is not learning and has been stuck at 0.17 accuracy. There is something that I am doing incorrectly, but I am unable to recognise it. The way I am running my training is as follows:
import tensorflow.keras as keras
## DEFINE THE MODEL ##
vgg16 = keras.applications.vgg16.VGG16()
model = keras.Sequential()
for layer in vgg16.layers:
    model.add(layer)
model.layers.pop()
for layer in model.layers:
    layer.trainable = False
model.add(keras.layers.Dense(10, activation="softmax"))
## START THE TRAINING ##
train_optimizer_rmsProp = keras.optimizers.RMSprop(lr=0.0001)
model.compile(loss="categorical_crossentropy", optimizer=train_optimizer_rmsProp, metrics=['accuracy'])
batch_size = 128*1
data_generator = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255
)
train_generator = data_generator.flow_from_directory(
    'training',
    target_size=(224, 224),
    batch_size=batch_size,
    color_mode='rgb',
    class_mode='categorical'
)
validation_generator = data_generator.flow_from_directory(
    'validate',
    target_size=(224, 224),
    batch_size=batch_size,
    color_mode='rgb',
    class_mode='categorical')
history = model.fit_generator(
    train_generator,
    validation_data=validation_generator,
    validation_steps=math.ceil(val_split_length / batch_size),
    epochs=15,
    steps_per_epoch=math.ceil(num_train_samples / batch_size),
    use_multiprocessing=True,
    workers=8,
    callbacks=model_callbacks,
    verbose=2
)
What is it that I am doing wrong? Is there something that I am missing? I was expecting a very high accuracy since it is carrying weights from ImageNet, but it is stuck at 0.17 accuracy from the first epoch.
I assume you're upsampling the 32x32 MNIST-like images to fit the VGG16 input. What you should actually do in this case is remove all the dense layers; that way you can feed in any image size, since in convolutional layers the weights are agnostic to the image size.
You can do this like:
vgg16 = keras.applications.vgg16.VGG16(include_top=False, input_shape=(32, 32, 3))
In my opinion this should be the default behaviour of the constructor.
When you upsample the image, in the best case you're basically blurring it. Consider that a single pixel of the original image corresponds to 7 pixels of the upsampled one, while VGG16's filters are only 3 pixels wide; in other words, you're losing the image's features.
It is not necessary to add 3 dense layers at the end like the original VGG16; you can try with the same single layer you have in your code.
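A minimal sketch of what that could look like for 32x32 SVHN-style inputs, assuming a frozen convolutional base and a single softmax head (this is an illustration, not the original poster's code):
# Sketch: frozen VGG16 convolutional base + one softmax layer for 10 digit classes
vgg16 = keras.applications.vgg16.VGG16(include_top=False, input_shape=(32, 32, 3))
vgg16.trainable = False

model = keras.Sequential([
    vgg16,
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="categorical_crossentropy",
              optimizer=keras.optimizers.RMSprop(lr=0.0001),
              metrics=["accuracy"])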
I have a weird situation in Keras and it really freaks me out.
I am trying to train a CNN using a pretrained Inception model with additional convolution, global average pooling and dense layers. I am using an ImageDataGenerator to load the data.
The data generator is working fine; I have tested that. The model also compiles well. But when I run fit_generator, no output is printed, the CPU is at 100%, and memory slowly fills up until it overflows. And although I have a GPU and have worked with it in TensorFlow (which is the backend here) a number of times, it is completely ignored by Keras.
Considering that batch size could be a problem, I set it to 1, but it did not solve the issue. The images are of size 299x299, which is not that big anyway.
I will post the code below as a reference, though it seems to me that nothing is wrong with it:
def get_datagen():
    return ImageDataGenerator(rotation_range=30,
                              width_shift_range=0.2,
                              height_shift_range=0.2,
                              horizontal_flip=True,
                              fill_mode='nearest')
# Setup and compile the model.
model = InceptionV3(include_top=False, input_shape=(None, None, 3))
# Set the model layers to be untrainable
for layer in model.layers:
    layer.trainable = False
x = model.output
x = Conv2D(120, 5, activation='relu')(x)
x = GlobalAveragePooling2D()(x)
predictions = Activation('softmax')(x)
model_final = Model(inputs=model.inputs, outputs=predictions)
model_final.compile(optimizer='adam', loss='categorical_crossentropy',metrics=['accuracy'])
# Define the dataflow.
train_gen = get_datagen()
val_test_gen = get_datagen()
train_data = train_gen.flow_from_directory(train_folder, target_size=(299, 299), batch_size=1)
val_data = val_test_gen.flow_from_directory(validation_folder, target_size=(299, 299), batch_size=1)
test_data = val_test_gen.flow_from_directory(test_folder, target_size=(299, 299), batch_size=1)
train_size = train_data.n
val_size = val_data.n
test_size = test_data.n
# Define callbacks.
model_checkpoint = ModelCheckpoint('../models/dbc1/', monitor='val_accuracy', verbose=1, save_best_only=True)
early_stopping = EarlyStopping(monitor='val_accuracy', patience=3, verbose=1, mode='max')
tensorboard = TensorBoard(log_dir='../log/dbc1', histogram_freq=1, write_grads=True, )
model_final.fit_generator(train_data, steps_per_epoch=1, epochs=100,
                          callbacks=[model_checkpoint, early_stopping, tensorboard],
                          validation_data=val_data, verbose=1)
EDIT
It seems the tensorboard callback was the problem here. When I remove it, everything works. Does anyone know why this is happening?
There seems to be a problem (possibly related to keras#3358) when using histogram_freq=1 under certain conditions.
You could try setting histogram_freq=0 and submitting an issue at the Keras repository. You wouldn't have the gradient histograms, but at least you would be able to train:
model.fit(...,
          callbacks=[
              TensorBoard(log_dir='./logs/', batch_size=batch_size),
              ...
          ])
I notice that this problem doesn't happen with all pre-trained models. If using InceptionV3 is not a requirement, I recommend switching to another model. So far, I have found that the following code (adapted from yours, using VGG19) works on keras==2.1.2 and tensorflow==1.4.1:
from keras.applications import VGG19
from keras.applications.vgg19 import preprocess_input
from keras.layers import Conv2D, GlobalAveragePooling2D, Activation
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard

input_shape = (224, 224, 3)
batch_size = 1

# Frozen VGG19 convolutional base
model = VGG19(include_top=False, input_shape=input_shape)
for layer in model.layers:
    layer.trainable = False

x, y = model.input, model.output
y = Conv2D(2, 5, activation='relu')(y)
y = GlobalAveragePooling2D()(y)
y = Activation('softmax')(y)

model = Model(inputs=model.inputs, outputs=y)
model.compile('adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

g = ImageDataGenerator(rotation_range=30,
                       width_shift_range=0.2,
                       height_shift_range=0.2,
                       horizontal_flip=True,
                       preprocessing_function=preprocess_input)

train_data = g.flow_from_directory(train_folder,
                                   target_size=input_shape[:2],
                                   batch_size=batch_size)
val_data = g.flow_from_directory(validation_folder,
                                 target_size=input_shape[:2],
                                 batch_size=batch_size)
test_data = g.flow_from_directory(test_folder,
                                  target_size=input_shape[:2],
                                  batch_size=batch_size)

model.fit_generator(train_data, steps_per_epoch=1, epochs=100,
                    validation_data=val_data, verbose=1,
                    callbacks=[
                        ModelCheckpoint('./ckpt.hdf5',
                                        monitor='val_accuracy',
                                        verbose=1,
                                        save_best_only=True),
                        EarlyStopping(patience=3, verbose=1),
                        TensorBoard(log_dir='./logs/',
                                    batch_size=batch_size,
                                    histogram_freq=1,
                                    write_grads=True)])