I have built and tested two convolutional neural network models (VGG-16 and a 3-layer CNN) to classify lung CT scans for COVID-19.
Prior to classification, I performed image segmentation via k-means clustering on the images to try to improve the classification performance.
The segmented images look like below.
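For reference, the segmentation step described above might look roughly like this (a sketch using OpenCV; the K value, file path, and grayscale input are assumptions, not from the original post):
import cv2
import numpy as np

img = cv2.imread('ct_slice.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
pixels = img.reshape(-1, 1).astype(np.float32)

K = 3  # e.g. background / lung field / tissue; K is an assumption
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# map each pixel to its cluster centre to get the segmented image
segmented = centers[labels.flatten()].reshape(img.shape).astype(np.uint8)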
I trained and evaluated the VGG-16 model on the segmented images and the raw images separately, and lastly trained and evaluated a 3-layer CNN on the segmented images only. Below are the results for their train/validation loss and accuracy.
For the simple 3-layer CNN model, I can clearly see that it trains well and starts to overfit once epochs go past 2. But I don't understand why the validation accuracy of the VGG model doesn't look like an exponential curve; instead it looks like a horizontally straight line, or a fluctuating horizontal one.
Besides, the simple 3-layer CNN model seems to perform better. Is this due to vanishing gradients in the VGG model? Or are the images themselves simple enough that a deep architecture brings no benefit?
I'd appreciate it if you could share your knowledge of this learning behaviour of the models.
This is the code for the VGG-16 model:
# build model
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from keras.optimizers import Adam

img_height = 256
img_width = 256
model = Sequential()
model.add(Conv2D(input_shape=(img_height,img_width,1),filters=64,kernel_size=(3,3),padding="same", activation="relu"))
model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Flatten())
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=1, activation="sigmoid"))
opt = Adam(learning_rate=0.001)  # older Keras versions spelled this lr=0.001
model.compile(optimizer=opt, loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
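Training was then presumably done with a fit call roughly like the following (this is not shown in the original post; the array names, batch size, and epoch count are placeholder assumptions):
history = model.fit(X_train, y_train,
                    batch_size=32, epochs=20,
                    validation_data=(X_val, y_val))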
And this is the code for the 3-layer CNN:
# build model
model2 = Sequential()
model2.add(Conv2D(32, 3, padding='same', activation='relu',input_shape=(img_height, img_width, 1)))
model2.add(MaxPool2D())
model2.add(Conv2D(64, 5, padding='same', activation='relu'))
model2.add(MaxPool2D())
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dense(1, activation='sigmoid'))
opt = Adam(learning_rate=0.001)  # as above, 'lr' in older Keras versions
model2.compile(optimizer=opt, loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
Thank you!
Looking at the accuracies for what is presumably a binary problem, you can observe that the model is just guessing randomly (acc ~ 0.5).
The fact that your 3-layer model gives much better results on the train set indicates that you are not training long enough to overfit.
In addition, you do not seem to use a proper initialization of the network. Note: at the beginning of an implementation process, overfitting indicates that the training implementation just works fine. Hence it is a good thing in this phase.
Therefore, the first step would be to get the model to overfit. You seem to train from scratch. In that case it can take a few hundred epochs until the gradients impact the first convolutions on a complex model like VGG-16.
As the 3-layer CNN seems to overfit quite heavily, I conclude that your dataset is rather small.
Hence, I would recommend starting from a pre-trained model (VGG-16) and re-training just the last two layers. This should give much better results; a sketch of that setup follows.
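A minimal sketch of that setup, assuming tf.keras and 3-channel input (the pre-trained ImageNet weights expect RGB, so the grayscale slices would need to be stacked to three channels first); the head size and learning rate are assumptions:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

base = VGG16(weights="imagenet", include_top=False, input_shape=(256, 256, 3))
for layer in base.layers[:-2]:  # freeze everything except the last two layers
    layer.trainable = False

x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
out = Dense(1, activation="sigmoid")(x)

model = Model(base.input, out)
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])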
As per what @CAFEBABE suggested, I have tried two approaches. First, I increased the number of epochs to 200, changed the optimiser to SGD, and reduced the learning rate down to 1e-5.
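Roughly, that re-tuned training setup looks like this (a sketch; the array names are placeholders):
from tensorflow.keras.optimizers import SGD

opt = SGD(learning_rate=1e-5)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=200, validation_data=(X_val, y_val))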
And second, I used pre-trained weights for the VGG-16 model and trained only the last two convolutional layers. Below is the plot displaying the tuned VGG-16 model, the pre-trained VGG-16 model, and the 3-layer CNN model (from top to bottom).
Certainly, tuning had an effect on the performance, but it was very marginal. I guess the learnable features from the dataset of ~600 images were not sufficient to train the model. The pre-trained model benefited significantly, reaching overfitting at ~25 epochs. However, in comparison with the 3-layer CNN model, the test accuracies of these two models are similar, ranging between 0.7 and 0.8. I guess this is again due to the limitations of the dataset.
Thanks again to @CAFEBABE for helping with my problem, and I hope this can help other people who might face a similar problem.
The shape of the train/test data is (samples, 256, 256, 1). The training dataset has around 1400 samples, the validation dataset has 150 samples, and the test dataset has 250 samples. I then built a CNN model for a six-object classification task. However, no matter how much I tune the parameters or add/remove layers (conv & dense), I get chance-level accuracy all the time (around 16.5%). So I would like to know whether I made some fatal mistake while building the model, or whether there is something wrong with the data itself, not the CNN model.
Code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def build_cnn_model(input_shape, activation='relu'):
    model = Sequential()
    # 3 convolution layers with max pooling
    model.add(Conv2D(64, (5, 5), activation=activation, padding='same', input_shape=input_shape))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(128, (5, 5), activation=activation, padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(256, (5, 5), activation=activation, padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    # 3 fully connected layers
    model.add(Dense(1024, activation=activation))
    model.add(Dropout(0.5))
    model.add(Dense(512, activation=activation))
    model.add(Dropout(0.5))
    model.add(Dense(6, activation='softmax'))  # 6 classes
    # summarize the model
    print(model.summary())
    return model
def compile_and_fit_model(model, X_train, y_train, X_vali, y_vali, batch_size, n_epochs, LR=0.01):
    # compile the model
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
        loss='sparse_categorical_crossentropy',
        metrics=['sparse_categorical_accuracy'])
    # fit the model
    history = model.fit(x=X_train,
                        y=y_train,
                        batch_size=batch_size,
                        epochs=n_epochs,
                        verbose=1,
                        validation_data=(X_vali, y_vali))
    return model, history
I transformed the MEG data my professor recorded into magnitude scalograms using the CWT; pywt.cwt(data, scales, wavelet) was used. If I plot the coefficients I got from cwt, I get a graph like this (I merged 62 channels into one graph).
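For reference, the CWT step as described would look roughly like this (a sketch; the scale range, wavelet, and placeholder signal are assumptions):
import numpy as np
import pywt

signal = np.random.randn(1000)      # placeholder for one MEG channel
scales = np.arange(1, 129)          # assumed scale range
coeffs, freqs = pywt.cwt(signal, scales, 'morl')
scalogram = np.abs(coeffs)          # magnitude scalogram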
I used the coefficients as train/test data for the CNN model. However, no matter how I tuned the parameters or added/removed layers, the classification accuracy was unchanged. So I want to know where I made mistakes: did I make mistakes building the CNN model, or in the CWT (the way I handled the data)?
Please give me some advice, thank you.
What is the accuracy on the training data? If you have a small dataset and the model does not overfit after training for a while, then something is wrong with the model. You can also test with existing datasets that the model should be able to handle, like Fashion-MNIST (a sketch of such a check follows below).
Testing whether you handled the data correctly is harder. Did you write unit tests for the different steps in the preprocessing pipeline?
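A hypothetical sanity check along those lines, reusing the two functions from the question (note that the final Dense layer there is hard-coded to 6 units, while Fashion-MNIST has 10 classes, so it would need adjusting first):
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None].astype('float32') / 255.0
x_test = x_test[..., None].astype('float32') / 255.0

model = build_cnn_model(input_shape=(28, 28, 1))  # change Dense(6) -> Dense(10) first
model, history = compile_and_fit_model(model, x_train, y_train, x_test, y_test,
                                       batch_size=64, n_epochs=3, LR=1e-3)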
I am trying to use a CNN for feature extraction and XGBoost for classification of image data. I researched and found that it can be done by extracting the data after the convolution layers. I found some source code for a similar problem and tried it on my own.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, kernel_size=(3,3), strides=(1,1), padding='same', activation="relu", input_shape = data.shape[1:]))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(64, kernel_size=(3,3), padding='same', strides=(1,1), activation="relu") )
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2))) #max pool window 2x2
model.add(Conv2D(128, kernel_size=(3,3), padding='same', strides=(1,1), activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2))) #max pool window 2x2
model.add(Conv2D(256, kernel_size=(3,3), padding='same', strides=(1,1), activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2))) #max pool window 2x2
model.add(Flatten())
model.add(Dense(128, activation="relu", name='firstDenseLayer'))
model.add(Dense(1, activation="sigmoid"))
# model.summary()
# print(model)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(data, label, batch_size=16, epochs=10, validation_data=(val_data, val_label))
Below, I access the dense layer named "firstDenseLayer".
import xgboost as xgb
from keras.models import Model
layerName = 'firstDenseLayer'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layerName).output)
intermediate_output = intermediate_layer_model.predict(data)
from xgboost import XGBClassifier
xgbmodel = XGBClassifier(objective='multi:softmax', num_class= 2)
xgbmodel.fit(intermediate_output, label)
xgbmodel.score(intermediate_output, label)
As I am new to this, I have several points of confusion.
How does the data flow? After I extract the features of the pictures via the convolution layers, how do I actually access the data from there?
What is this line of code doing? What data is it extracting?
intermediate_output = intermediate_layer_model.predict(data)
When I omit (keep commented out) the line below
model.fit(data, label, batch_size=16, epochs=10, validation_data=(val_data, val_label))
from the first snippet and run the XGBoost model directly, XGBoost gives low accuracy, and when I don't, it gives higher accuracy. Why is that?
Kindly help me out; I have been stuck on this for quite a while. I am just trying to access the extracted feature data from the last convolution layer and use it for classification with XGBoost. As I tried to follow a method I found online, I am not sure if it is the only way of doing it. If there is another way, kindly let me know.
The model.fit(...) line does what you would expect: it trains the convnet defined by model on some data and labels. Your classifier yielding lower accuracy when you use randomly initialized weights (i.e. without running fit) is not surprising.
intermediate_layer_model is constructed as a Keras model whose output is the dense layer just before the output of model. Note the name parameter given to the dense layer in the construction of model.
You could just as easily give a name to one of the Conv2D layers and access it the same way. Alternatively, you could store the layer in a Python variable, i.e. instead of
model.add(Conv2D(256, kernel_size=(3,3), padding='same', strides=(1,1), activation="relu"))
in the model construction it could say
last_conv_layer = Conv2D(256, kernel_size=(3,3), padding='same', strides=(1,1), activation="relu")
model.add(last_conv_layer)
Then for the intermediate_layer_model you put
intermediate_layer_model = Model(inputs=model.input, outputs=last_conv_layer.output)
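One caveat if you tap a Conv2D layer this way: its output is 4-D (batch, height, width, channels), whereas XGBoost expects a 2-D feature matrix, so the features would need flattening first. A sketch, reusing the names above:
conv_features = intermediate_layer_model.predict(data)
conv_features = conv_features.reshape(conv_features.shape[0], -1)  # (samples, features)
xgbmodel.fit(conv_features, label)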
I'm doing a project where a Python script uses a convolutional neural network to determine if a plant is healthy, and then waters it based on that. While training the CNN, it seems to reach 100% accuracy quite early, although it isn't actually accurate. I only have a little less than 2000 images, and was wondering if I don't have enough, or if it is my model, which is shown here:
self.model = Sequential()
self.model.add(Conv2D(numFilters, filterSize, activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)))
self.model.add(Conv2D(numFilters * 2, (3, 3), activation='relu'))
self.model.add(MaxPooling2D(pool_size=poolSize))
self.model.add(Dropout(0.25))
self.model.add(Flatten())
self.model.add(Dense(numFilters * 4, activation='relu'))
self.model.add(Dropout(0.5))
self.model.add(Dense(2, activation='softmax'))
self.model.compile(loss='categorical_crossentropy',
                   optimizer='adam',
                   metrics=['accuracy'])
I would just like to know why it doesn't train well.
Thanks.
I'm researching the possibility of implementing a CNN in order to classify images as "good" or "bad" but am having no luck with my current architecture.
Characteristics that denote a "bad" image:
Overexposure
Oversaturation
Incorrect white balance
Blurriness
Would it be feasible to implement a neural network to classify images based on these characteristics, or is it best left to a traditional algorithm that simply looks at the variance in brightness/contrast throughout an image and classifies it that way?
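For concreteness, a crude version of that traditional baseline might look like this (a sketch; the thresholds are arbitrary assumptions):
import cv2

def looks_bad(path, low_mean=40, high_mean=215, min_std=25):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    mean, std = gray.mean(), gray.std()
    # flag very dark/bright frames (exposure) or low-contrast frames (washed out)
    return mean < low_mean or mean > high_mean or std < min_std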
I have attempted training a CNN using the VGGNet architecture, but I always seem to get a biased and unreliable model, regardless of the number of epochs or the number of steps.
My current model's architecture is very simple (as I am new to the whole machine learning world), but it seemed to work fine with other classification problems. I have modified it slightly to work better with this binary classification problem:
# CONV => RELU => POOL layer set
# define convolutional layers, use "ReLU" activation function
# and reduce the spatial size (width and height) with pool layers
model.add(Conv2D(32, (3, 3), padding="same", input_shape=input_shape)) # 32 3x3 filters (height, width, depth)
model.add(Activation("relu"))
model.add(BatchNormalization(axis=channel_dimension))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25)) # helps prevent overfitting (25% of neurons disconnected randomly)
# (CONV => RELU) * 2 => POOL layer set (increasing number of filters as you go deeper into the CNN)
model.add(Conv2D(64, (3, 3), padding="same")) # 64 3x3 filters (input_shape is only needed on the first layer)
model.add(Activation("relu"))
model.add(BatchNormalization(axis=channel_dimension))
model.add(Conv2D(64, (3, 3), padding="same")) # 64 3x3 filters
model.add(Activation("relu"))
model.add(BatchNormalization(axis=channel_dimension))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25)) # helps prevent overfitting (25% of neurons disconnected randomly)
# (CONV => RELU) * 3 => POOL layer set (input volume size becoming smaller and smaller)
model.add(Conv2D(128, (3, 3), padding="same")) # 128 3x3 filters
model.add(Activation("relu"))
model.add(BatchNormalization(axis=channel_dimension))
model.add(Conv2D(128, (3, 3), padding="same")) # 128 3x3 filters
model.add(Activation("relu"))
model.add(BatchNormalization(axis=channel_dimension))
model.add(Conv2D(128, (3, 3), padding="same")) # 128 3x3 filters
model.add(Activation("relu"))
model.add(BatchNormalization(axis=channel_dimension))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25)) # helps prevent overfitting (25% of neurons disconnected randomly)
# only set of FC => RELU layers
model.add(Flatten())
model.add(Dense(512))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
# sigmoid classifier (output layer)
model.add(Dense(classes))
model.add(Activation("sigmoid"))
Are there any glaring omissions or mistakes in this model, or can I simply not solve this problem using deep learning (with my current GPU, a GTX 970)?
Thanks for your time and experience,
Josh
EDIT:
Here is my code for compiling/training the model:
# initialise the model and optimiser
print("[INFO] Training network...")
opt = SGD(lr=initial_lr, decay=initial_lr / epochs)
model.compile(loss="sparse_categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
# set up checkpoints
model_name = "output/50_epochs_{epoch:02d}_{val_acc:.2f}.model"
checkpoint = ModelCheckpoint(model_name, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=5, min_lr=0.001)
tensorboard = TensorBoard(log_dir="logs/{}".format(time()))
callbacks_list = [checkpoint, reduce_lr, tensorboard]
# train the network
H = model.fit_generator(training_set, steps_per_epoch=500, epochs=50, validation_data=test_set, validation_steps=150, callbacks=callbacks_list)
Independently of any other advice (including the answer already provided), and assuming classes=2 (which you don't clarify - there is a reason we ask for an MCVE here), you seem to be making a fundamental mistake in your final layer, i.e.:
# sigmoid classifier (output layer)
model.add(Dense(classes))
model.add(Activation("sigmoid"))
A sigmoid activation is suitable only if your final layer consists of a single node; if classes=2, as I suspect, based also on your puzzling statement in the comments that
with three different images, my results are 0.987 bad and 0.999 good
and
I was giving you the predictions from the model previously
you should use a softmax activation, i.e.
model.add(Dense(classes))
model.add(Activation("softmax"))
Alternatively, you could use sigmoid, but your final layer should consist of a single node, i.e.
model.add(Dense(1))
model.add(Activation("sigmoid"))
The latter is usually preferred in binary classification settings, but the results should be the same in principle.
UPDATE (after updating the question):
sparse_categorical_crossentropy is not the correct loss here, either.
All in all, try the following changes:
model.compile(loss="binary_crossentropy", optimizer=Adam(), metrics=["accuracy"])
# final layer:
model.add(Dense(1))
model.add(Activation("sigmoid"))
with the Adam optimizer (which needs an import). Also, dropout should not be used by default - see this thread; start without it and only add it if necessary (i.e. if you see signs of overfitting).
I suggest you go for transfer learning instead of training the whole network.
Use weights trained on a huge dataset like ImageNet.
You can easily do this with Keras: just import a model with pre-trained weights (e.g. Xception), replace the last layer (which represents the 1000 ImageNet classes) with a 2-node dense layer (since you have only 2 classes), and set trainable=False for the base layers and trainable=True for the custom added layers (like the dense layer with 2 nodes).
Then you can train the model in the usual way.
Demo code:
from keras.applications import *
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

base_model = Xception(input_shape=(img_width, img_height, 3), weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(base_model.input, predictions)
# freeze the base layer weights
for layer in base_model.layers:
    layer.trainable = False
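Then you compile and fit as usual; a sketch, where the loss assumes integer class labels and the variable names are placeholders:
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10, validation_data=(val_images, val_labels))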
So I've been building a convolutional neural network. I'm trying to predict whether a boardgame state (10x10 matrix) will lead to a win (binary 0 or 1) or not.
I have six million examples, which you would think would be enough, but clearly not, as my network is predicting all of one class...
Is there something obvious I'm missing? I tried giving it even 10 examples and it still predicts them all as the same class.
The input matrices are 10x10 of integers.
Input reshaping:
x_train = x_train.reshape(len(x_train),10,10,1)
Actual model building:
import keras
from keras import metrics
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, BatchNormalization

model = Sequential()
model.add(Conv2D(3, kernel_size=(1, 1), strides=(1, 1), activation='relu', input_shape=(10,10,1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(500, activation='tanh'))
model.add(Dropout(0.5))
model.add(keras.layers.Dense(75, activation='relu'))
model.add(BatchNormalization())
model.add(keras.layers.Dense(10, activation='sigmoid'))
model.add(keras.layers.Dense(1,kernel_initializer='normal',activation='sigmoid'))
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, decay=0.01, nesterov=True)  # older Keras spelled this lr=0.001
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=[metrics.binary_accuracy])
model.fit(x_train, y_train,epochs = 100, batch_size = 128, verbose=1)
I've tried modifying the learning rate, momentum, decay, the kernel sizes, layer types, sizes... I checked for dying ReLU and that didn't seem to be the problem. Removing the dropout/batch-normalization layers (or various random layers) didn't do anything either.
The data have roughly a 53/47% split across the labels, so it's not that either.
I'm more confused because even when I ask it to predict on the train set, it STILL insists on labeling everything as one class, even if there are only ~20 samples or fewer.