Keras CNN model for image classification does not generalize well

I want to implement a model in Keras for sentiment classification (anger or non-anger) based on spectrograms. I have generated the spectrograms using the audio dataset from Friends. Each spectrogram covers 8 seconds of audio. In total, I have 9117 train samples, 1006 validation samples, and 2402 test samples.
I use a relatively simple CNN architecture and have tried different combinations of it with optimizer, learning rate, and batch size, but none of the results generalize well. The training loss decreases nicely up to a certain point, while the validation loss increases with every epoch.
This is the model I am using:
model = Sequential()
model.add(Convolution2D(filters=32, kernel_size=3, strides=1,input_shape=input_shape, activation='relu', padding="same"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(filters=64, kernel_size=3, strides=1, activation='relu', padding="same"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Convolution2D(filters=128, kernel_size=3, strides=1, activation='relu', padding="same"))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(classes, activation='sigmoid')) #output layer
This is how I load the images:
img_rows = 120
img_cols = 160

train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    SPECTOGRAMS_DIRECTORY + TRAIN_SUBDIR,
    target_size=(img_cols, img_rows),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(
    SPECTOGRAMS_DIRECTORY + VALIDATION_SUBDIR,
    target_size=(img_cols, img_rows),
    batch_size=batch_size,
    class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    SPECTOGRAMS_DIRECTORY + TEST_SUBDIR,
    target_size=(img_cols, img_rows),
    batch_size=1,
    class_mode='binary',
    shuffle=False)

input_shape = (img_cols, img_rows, channels)
opt = SGD(lr=0.001)
model.compile(loss='binary_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])

history = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    verbose=2)

## EVALUATE
print("EVALUATE THE MODEL...")
score = model.evaluate_generator(generator=validation_generator,
                                 steps=nb_validation_samples // batch_size)
The spectrograms look like this:
[spectrogram image]
As I said, I have tried different combinations of batch size (16, 32, 64), SGD with a 0.001 learning rate, and Adam with a 0.0001 learning rate, but for every combination the training loss goes down while the validation loss goes up.

Your model seems to be over-fitting. You can try the approaches below to overcome this issue.
If possible, try to gather more data, or use data augmentation techniques to increase the number of samples (a sketch follows below).
You can use dropout in Keras to reduce over-fitting. (It looks like you have already added dropout; you can try tuning its rate.)
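As a minimal sketch of what that augmentation could look like for your training generator (the transform values here are illustrative assumptions, and augmentations like flips may not be meaningful for spectrograms, so treat them with care):
# Hypothetical augmentation settings -- tune for spectrograms.
train_datagen = ImageDataGenerator(
    rescale=1./255,
    width_shift_range=0.1,  # small shifts along the time axis
    zoom_range=0.1,
    fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1./255)  # validation stays un-augmented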
Thank you

Related

CNN accuracy: 0.0000e+00 for multi-classification on images

I have the following code that produces my horrible accuracy dilemma. Has anyone else encountered this issue for a multi-classification task (49 different classes of images)?
I am running ResNet50 on top of my CNN model with softmax as the last activation function; my loss is categorical_crossentropy and my optimizer is Adam.
What might I be doing wrong?
## Build CNN architecture
model1 = Sequential()
model1.add(Conv2D(32, (3, 3), strides=1, input_shape=(720, 720, 3)))
model1.add(Activation('relu'))
model1.add(Conv2D(32, (3, 3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2, 2)))
model1.add(Conv2D(64, (3, 3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(Conv2D(64, (3, 3), strides=1, padding="same"))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2, 2)))
model1.add(Flatten())
model1.add(Dense(200))
model1.add(Activation('relu'))
model1.add(Dense(200))
model1.add(Dropout(0.24))
model1.add(Activation('relu'))
model1.add(Dense(49, activation='softmax'))
model1.summary()

# Image data generator for on-the-fly image augmentation
directory = '/home/carlini-TF2/data/train/'
batch_size = 64
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=90.,
    shear_range=0.2,
    zoom_range=[0.8, 1.2],
    horizontal_flip=True,
    validation_split=0.2,
    preprocessing_function=tf.keras.applications.resnet50.preprocess_input)
train_generator = train_datagen.flow_from_directory(directory=directory,
                                                    subset='training',
                                                    target_size=(720, 720),
                                                    shuffle=True,
                                                    seed=42,
                                                    color_mode='rgb',
                                                    class_mode='categorical',
                                                    batch_size=batch_size)

valid_directory = '/home/carlini-TF2/data/test/'
valid_generator = train_datagen.flow_from_directory(directory=valid_directory,
                                                    target_size=(720, 720),
                                                    color_mode="rgb",
                                                    batch_size=batch_size,
                                                    class_mode="categorical",
                                                    subset='validation',
                                                    shuffle=True,
                                                    seed=42)

## Compile and train Neural Network
METRICS = [
    tf.keras.metrics.Accuracy(name='accuracy'),
    tf.keras.metrics.Precision(name='precision'),
    tf.keras.metrics.Recall(name='recall')]

# optimal optimizer FN | loss FN to work with accuracy metric
model1.compile(loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
               optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
               metrics=METRICS)

# stop training when loss gets worse after consecutive epochs
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)

# fit model with augmented training set and validation set | shuffle batch
history = model1.fit(train_generator,
                     validation_data=valid_generator,
                     steps_per_epoch=train_generator.n // batch_size,
                     validation_steps=valid_generator.n // batch_size,
                     shuffle=True, callbacks=[callback],
                     epochs=50)
The issue is that ResNet50 was only being used for data augmentation (via its preprocess_input function) and not in the CNN architecture itself. To reach a somewhat robust model, the following code is needed.
We can throw out the previous architecture and use a very simple model plus ResNet50, since this gives conclusive results.
We must use the Functional API, since ResNet50 was built on it:
data_bias = np.log(1802./4657)
initializer = tf.keras.initializers.Constant(data_bias)

resnet50_imagenet_model = tf.keras.applications.ResNet50(weights='imagenet',
                                                         include_top=False,
                                                         input_shape=(720, 720, 3))
resnet50_imagenet_model.trainable = False

# Flatten output layer of ResNet
flattened = tf.keras.layers.Flatten()(resnet50_imagenet_model.output)

# Fully connected output layer with 49 different labels
fc2 = tf.keras.layers.Dense(49, activation='softmax', bias_initializer=initializer,
                            name="AddedDense2")(flattened)

model1 = tf.keras.models.Model(inputs=resnet50_imagenet_model.input, outputs=fc2)
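To round this off, here is a minimal sketch of compiling and fitting the rebuilt model, assuming the train_generator and valid_generator defined in the question. As a side note, passing metrics=['accuracy'] as a string lets Keras resolve the accuracy variant matching the loss, whereas tf.keras.metrics.Accuracy() measures exact equality between labels and predictions and can report ~0 with one-hot targets:
# Sketch only: the generators are assumed to be the ones from the question.
model1.compile(loss='categorical_crossentropy',
               optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
               metrics=['accuracy'])
history = model1.fit(train_generator,
                     validation_data=valid_generator,
                     epochs=50)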

Model gets 97% accuracy on train and validation, but custom predictions go wrong

I use a CNN model for image classification; it got great accuracy on test and validation (98% and 97%), but when I use my own image for prediction it always goes wrong. Here is my code:
BATCH_SIZE = 30
IMG_HEIGHT = 256
IMG_WIDTH = 256
STEPS_PER_EPOCH = np.ceil(image_count / BATCH_SIZE)

train_data_gen = image_generator.flow_from_directory(directory=str(data_dir),
                                                     batch_size=BATCH_SIZE,
                                                     shuffle=True,
                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                     classes=list(CLASS_NAMES))
Here is the preparation for the dataset and data augmentation:
imgDataGen = ImageDataGenerator(
    validation_split=0.2,
    rescale=1/255,
    horizontal_flip=True,
    zoom_range=0.3,
    rotation_range=15.,
    width_shift_range=0.1,
    height_shift_range=0.1,
)
Prepare the data:
train_dataset = imgDataGen.flow_from_directory(
    directory=str(data_dir),
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    classes=list(CLASS_NAMES),
    batch_size=BATCH_SIZE,
    subset='training'
)
val_dataset = imgDataGen.flow_from_directory(
    directory=str(data_dir),
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    classes=list(CLASS_NAMES),
    batch_size=BATCH_SIZE,
    subset='validation'
)
The model:
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(256, 256, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(6, activation='sigmoid'))
Compile:
model.compile(loss='binary_crossentropy',
              optimizer=keras.optimizers.SGD(learning_rate=0.001, momentum=0.9),
              metrics=['acc'])
Train:
history = model.fit_generator(
    train_dataset,
    validation_data=val_dataset,
    workers=10,
    epochs=20,
)
It gets pretty high accuracy: 98% on test and 97% on validation. But when I try to predict with my own code:
def prepare(filepath):
    IMG_SIZE = 256
    img_array = cv2.imread(filepath)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(1, IMG_SIZE, IMG_SIZE, 3)

model = tf.keras.models.load_model('trained-model.h5', compile=False)
#np.set_printoptions(formatter={'float_kind':'{:f}'.format})
predict = model.predict([prepare('cat.jpg')])
pred_name = CATEGORIES[np.argmax(predict)]
print(pred_name)
it goes wrong: with a cat image it predicts dog, and with a dog image it predicts cat. Sometimes it is right, but I think a 98% model should be more accurate than this; if I try 5 images of cats it fails on 3 or 4 of them.
So is it because of the dataset or because of the code?
Please help, thanks.
So in your second code block you have this:
rescale=1/255
This normalizes your images into the range [0, 1], so every image gets rescaled (normalized) before going through the network. But in your last code block, where you test on an image, you didn't add normalization. Try adding it to your prepare function:
def prepare(filepath):
    IMG_SIZE = 256
    img_array = cv2.imread(filepath)
    # add this:
    img_array = img_array / 255.0
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(1, IMG_SIZE, IMG_SIZE, 3)

Overfitting problem with my validation data

I am applying a CNN model using Keras. I fed the detail coefficients of a level-5 discrete wavelet transform as a 2D array of size (5, 3840) into the CNN. I would like to use the CNN to predict seizures. The problem is that my network is overfitting. Any suggestions on how to solve the overfitting problem?
input_shape = (1, 22, 5, 3844)
model = Sequential()
#C1
model.add(Conv3D(16, (22, 5, 5), strides=(1, 2, 2), padding='same', activation='relu',
                 data_format="channels_first", input_shape=input_shape))
model.add(keras.layers.MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
model.add(BatchNormalization())
#C2
model.add(Conv3D(32, (1, 3, 3), strides=(1, 1, 1), padding='same',
                 data_format="channels_first", activation='relu'))  # unsure whether to remove padding
model.add(keras.layers.MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first"))
model.add(BatchNormalization())
#C3
model.add(Conv3D(64, (1, 3, 3), strides=(1, 1, 1), padding='same',
                 data_format="channels_first", activation='relu'))  # unsure whether to remove padding
model.add(keras.layers.MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(256, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])
return model
There are two frequently used regularization techniques to avoid over-fitting:
L1 & L2 regularization: regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are incorporated into the loss function that the network optimizes.
from keras import regularizers
model.add(Dense(64, input_dim=64,
                kernel_regularizer=regularizers.l2(0.01),
                activity_regularizer=regularizers.l1(0.01)))
Dropout: dropout consists of randomly setting a fraction of input units to 0 at each update during training time, which helps prevent over-fitting.
from keras.layers import Dropout
model.add(Dense(60, input_dim=60, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(30, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(1, activation='sigmoid'))
Also, you can use early stopping to interrupt training when the validation loss isn't decreasing anymore:
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
model.fit(x, y, validation_split=0.2, callbacks=[early_stopping])
Additionally, you might want to consider data augmentation techniques such as cropping, padding, and horizontal flipping. With these techniques you can increase the diversity of the data available for training without actually collecting new data, which helps the model capture data invariances and reduces over-fitting:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) / 32, epochs=epochs)
Steps to reduce overfitting:
Reduce the number of neural units in your hidden layers.
I do not think you need a softmax layer right after a sigmoid layer; your model is probably overfitting because of that. Try replacing the sigmoid layer with a Dense layer with relu activation, keeping the (n, 2) softmax output layer (a sketch is shown below).
Your learning rate is very low as well, which suggests your model should take long to find the global minimum and hence underfit, but that is not happening here. This solidifies my suspicion that the sigmoid layer is the cause.
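A minimal sketch of that suggested change, keeping the rest of the architecture from the question (the 256-unit size is carried over from the original code, not tuned):
# Hypothetical rework of the classifier head: relu in the hidden layer,
# softmax kept only on the 2-unit output.
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))   # was activation='sigmoid'
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))  # output shape (n, 2)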

Training a binary CNN (Keras) - Slow training time

I am training a binary CNN in Keras for classifying the polarity of emotions (expressions), e.g. Smiling/Not_smiling. This is my code. I am training on a multi-GPU machine, but I am surprised by how long the training takes: each class's binary model takes 5-6 hours. Is this normal/expected?
I had previously trained a multi-class model combining all the classes, and that took about 4 hours in total.
Note: each pos/neg class contains ~5000-10000 images.
Am I doing this right? Is this training duration expected?
class_names = ["smiling", "frowning", "surprised", "sad"]

## set vars!
for cname in class_names:
    print("[+] training: ", model_name, cname)
    dp_path_train = './emotion_data/{0}/train/{1}'.format(model_name, cname)
    dp_path_val = './emotion_data/{0}/val/{1}'.format(model_name, cname)
    dir_checkpoint = './models'
    G = 2  # no. of gpus to use
    batch_size = 32 * G
    step_size = 1000 // G
    print("[*] batch size & step size: ", batch_size, step_size)

    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(96, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.3))
    model.add(Dense(1, activation='sigmoid'))

    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    train_datagen = ImageDataGenerator(rescale=1./255,
                                       shear_range=0.2,
                                       zoom_range=0.2,
                                       horizontal_flip=True)
    test_datagen = ImageDataGenerator(rescale=1./255)

    training_set = train_datagen.flow_from_directory(dp_path_train,
                                                     target_size=(224, 224),
                                                     batch_size=batch_size,
                                                     class_mode='binary')
    test_set = test_datagen.flow_from_directory(dp_path_val,
                                                target_size=(224, 224),
                                                batch_size=batch_size,
                                                class_mode='binary')

    model.fit_generator(training_set,
                        steps_per_epoch=step_size,
                        epochs=50,
                        validation_data=test_set,
                        validation_steps=2000)

    print("[+] saving model: ", model_name, cname)
    model.save("./models2/{0}_{1}.hdf5".format(model_name, cname))
Removing all the BatchNormalization layers should help speed things up, or you can use BatchNormalization less frequently between the layers of your network architecture; a sketch of the latter is shown below.
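For illustration, a hedged sketch of the second option, keeping BatchNormalization after only two of the five conv/pool blocks (exactly which blocks to keep is an assumption to tune):
# Hypothetical variant of the question's architecture with fewer BatchNormalization layers.
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(96, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))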

How to check what features are extracted while training and testing a CNN model for image classification?

I'm using a CNN for training and testing on images of seeds. I want to know:
What features are getting extracted at every layer?
Is there any way to represent it in a graphical or image format?
How do I define my classifier to extract only specific features?
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
# dimensions of our images.
img_width, img_height = 150, 150
train_data_dir = 'Train_Walnut_Seed/train'
validation_data_dir = 'Train_Walnut_Seed/validation'
nb_train_samples = 70
nb_validation_samples = 9
epochs = 50
batch_size = 16
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('first_try_walnut.h5')
The above code is for training the classifier using a CNN. How can I visually represent the output at each layer during training?
Also, how can I deploy my trained model as a protocol buffer (.pb) file for use in my Android project?
I believe the best way, or at least the best way I know of, to extract useful features would be using an autoencoder.
Check out this article from the Keras blog.
Cheers,
Gabriel
I know this probably isn't an issue anymore, but I just thought I'd add this in case it's useful to someone else. As the features output by a CNN aren't really human-readable, it is difficult to inspect them. One way is to use t-SNE, which gives a visual indication of which embedded representations of the images are close to each other. Another way is a 'heat map', which shows in more detail which parts of an image are activating parts of the CNN. This post has a nice explanation of some of these techniques: http://cs231n.github.io/understanding-cnn/
Getting a classifier to focus on certain features is difficult: either you need to change the network architecture or use image pre-processing to accentuate the features you want the network to focus on. I'm afraid I can't really give more details on that. A simple per-layer visualization is sketched below.
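As a starting point for the per-layer question above, here is a minimal sketch (not from the answers; the layer selection and plotting details are assumptions) that builds a sub-model returning every conv layer's activations and plots a few feature maps:
from keras.models import Model
import matplotlib.pyplot as plt

# Sub-model that outputs the activations of every Conv2D layer.
layer_outputs = [l.output for l in model.layers if 'conv' in l.name]
activation_model = Model(inputs=model.input, outputs=layer_outputs)

# 'img' is assumed to be one preprocessed image of shape (1, 150, 150, 3).
activations = activation_model.predict(img)

# Plot the first 8 feature maps of the first conv layer.
first = activations[0]  # shape (1, height, width, n_filters)
for i in range(8):
    plt.subplot(2, 4, i + 1)
    plt.imshow(first[0, :, :, i], cmap='viridis')
    plt.axis('off')
plt.show()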
