I'm using Keras and TF 2.0.
I'm trying to apply ResNet50 pre-trained on ImageNet to a different problem (binary pneumonia classification), and I've found some discussion online about how to properly handle batch normalization layers when fine-tuning.
My question is whether I should freeze all the layers in the model or skip the batch normalization layers to do proper fine-tuning. By this I mean, if resnet is the pre-trained model, either
resnet.trainable = False
or
for layer in resnet.layers:
    if not isinstance(layer, keras.layers.BatchNormalization):
        layer.trainable = False
I'm reaching 97% test accuracy, but I think the model should perform better on such a simple task. Which way of freezing should I use?
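For reference, here is a minimal sketch of the first option (freezing the whole base), following the pattern from the Keras transfer-learning guide; the input size and the pooling/dense head are assumptions for the binary pneumonia task:

from tensorflow import keras

# Pre-trained backbone without the ImageNet classification head.
resnet = keras.applications.ResNet50(weights="imagenet", include_top=False)
resnet.trainable = False  # freezes every layer, including BatchNormalization

inputs = keras.Input(shape=(224, 224, 3))
# training=False keeps the BatchNormalization layers in inference mode,
# so their moving statistics are not updated while the head is trained.
x = resnet(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # pneumonia vs. normal

model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])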
I followed a blog on how to implement a VGG16 model from scratch and want to do the same with the pre-trained model from Keras. I've looked at some other blogs but can't find a solution that fits. My task is to classify integrated circuit images as defect or non-defect.
I have seen in a paper that the authors used a VGG16 model pre-trained on ImageNet for fabric defect detection, where they froze the first seven layers and fine-tuned the last nine for their own problem.
(Source: https://journals.sagepub.com/doi/full/10.1177/1558925019897396)
I have already seen examples of how to freeze all layers except the fully connected ones, but how can I freeze the first x layers and fine-tune the others for my problem?
VGG16 is fairly easy to implement from scratch, but for models like ResNet or Xception it gets a little trickier.
It is not necessary to implement a model from scratch to freeze a few layers; you can do this on pre-trained models as well. In Keras, you'd set trainable = False.
For example, let's say you want to use the pre-trained Xception model from keras and want to freeze the first x layers:
# In your includes
from keras.applications import Xception

# Since you're using the model for a different task, you'd want to remove the top
base_model = Xception(weights='imagenet', include_top=False)

# Freeze layers 0 to x
for layer in base_model.layers[0:x]:
    layer.trainable = False

# To see all the layers in detail and to check trainable parameters
base_model.summary()
Ideally you'd want to add another layer on top of this model with the output as your classes. For more details, you can check this keras guide: https://keras.io/guides/transfer_learning/
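For illustration, a possible head on top of the partially frozen Xception base might look like the following; the pooling layer and the single sigmoid output are assumptions for a binary defect/non-defect task:

from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Pool the convolutional features and map them to the defect/non-defect output
x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(1, activation='sigmoid')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])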
A lot of the time the pre-trained weights are very useful for other classification tasks, but if you want to train a model from scratch on your dataset, you can load the model without the ImageNet weights. Or, better, load the weights but don't freeze any layers; this retrains every layer, using the ImageNet weights as an initialization.
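As a hedged sketch of those two alternatives (the variable names are illustrative):

from keras.applications import Xception

# Train entirely from scratch: random initialization, no ImageNet weights
scratch_model = Xception(weights=None, include_top=False)

# Use ImageNet weights only as an initialization: load them, freeze nothing
warm_start_model = Xception(weights='imagenet', include_top=False)
for layer in warm_start_model.layers:
    layer.trainable = True  # the default, shown here for emphasis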
I hope I've answered your question.
I am trying to use the pre-trained InceptionV3 model. However, I want to remove the first five layers and add my own custom layers. How can I do that? I tried model.layers.pop(0), but that alone does not solve the problem.
Edit:
tf.keras does not help either as mentioned in the first answer:
model.layers.pop() doesn't work the same way in tf.keras as it does in Keras. In tf.keras, model.layers is a view of the model: you can't remove layers, but you can pick the layer whose output you want. For example,
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

base_model = InceptionV3(input_shape=shape, weights="imagenet", include_top=True)
# you don't want the last five layers:
base_model_output = base_model.layers[-6].output
# new layers
outputs = Dense(....)(base_model_output)
model = Model(base_model.input, outputs)
Since the first few layers starting from the input are changed, the pretrained weights for them cannot be used directly. So, instead of attempting complex surgery, the architecture can be taken from the source below and modified accordingly:
https://github.com/keras-team/keras-applications/blob/master/keras_applications/inception_v3.py
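If the copied definition keeps the original layer names for everything after the new stem, one option (a sketch, not part of the answer above) is to load the ImageNet weights by name so the untouched layers still get their pretrained initialization; modified_model here is a hypothetical model built from the edited definition:

from tensorflow.keras.applications import InceptionV3

# Grab the stock pretrained model once and dump its weights to disk.
reference = InceptionV3(weights="imagenet", include_top=True)
reference.save_weights("inception_v3_imagenet.h5")

# modified_model is assumed to be the edited InceptionV3 (new first layers,
# remaining layers and their original names unchanged). by_name=True loads
# weights only into layers whose names match the file.
# modified_model.load_weights("inception_v3_imagenet.h5", by_name=True)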
I trained a model with several layers, then for each layer in model.layers I set
layer.trainable = False
I added several layers to this model, called
model.compile(...)
And trained this new model for several epochs with part of the layers frozen.
Later I decided to unfreeze layers and ran
for layer in model.layers:
    layer.trainable = True
model.compile(...)
When I start training the model with the unfrozen layers, I get a very high loss value even though I just wanted to continue training from the previously learned weights. I also checked that after model.compile(...) the model still predicts well (the previously learned weights are not reset), but as soon as training starts everything gets 'erased' and I'm effectively starting from scratch.
Could someone clarify whether this behavior is expected? How do I recompile the model without starting from scratch?
P.S. I also tried manually saving the weights and assigning them back to the newly compiled model using layer.get_weights() and layer.set_weights().
I used the same compile parameters (the same optimizer and the same loss).
You might need to lower your learning rate when you start fine-tuning the previously trained layers. For example, a learning rate of 0.01 might work for your new dense layers (the top) while all other layers are frozen, but once you make every layer trainable you might need to reduce it to, say, 0.001. There is no need to manually copy or set weights.
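A rough sketch of that two-phase schedule, assuming an Adam optimizer and placeholder data and loss names:

from keras.optimizers import Adam

# Phase 1: only the new top layers are trainable, a larger step size is fine
model.compile(optimizer=Adam(lr=1e-2), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)

# Phase 2: unfreeze everything and continue with a much smaller learning rate
for layer in model.layers:
    layer.trainable = True
model.compile(optimizer=Adam(lr=1e-3), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)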
I am trying to train a ResNet network using the Keras backend in TensorFlow. The feed dictionary for each batch update is written as:
feed_dict= {x:X_train[indices[start:end]], y:Y_train[indices[start:end]], keras.backend.learning_phase():1}
I am using the Keras backend (keras.backend.set_session(sess)) because the original ResNet network is defined with Keras. Since the model contains dropout and batch_norm layers, it requires a learning phase to distinguish between training and testing.
I observe that whenever I set keras.backend.learning_phase(): 1, the model's train/test accuracy hardly rises above 10%. In contrast, if the learning phase is not set, i.e. the feed dictionary is defined as:
feed_dict= {x:X_train[indices[start:end]], y:Y_train[indices[start:end]]}
then, as expected, the model accuracy keeps increasing with epochs in the standard way.
I would appreciate it if someone could clarify whether the learning phase is unnecessary here or whether something else is wrong. The Keras 2.0 documentation seems to suggest using the learning phase with dropout and batch_norm layers.
Set the learning phase to 1 (training):
K.set_learning_phase(1)
Then you need to set training=False for all the batch normalization layers:
for layer in model.layers:
    if layer.name.startswith('bn'):
        layer.call(layer.input, training=False)
I trained a model with a BatchNormalization layer. The code looks like this:
In the training phase:
model = Sequential()
model.add(...)
...
k.set_learning_phase(1)
checkpoint = ModelCheckpoint(weights_file)
model.fit(..., callbacks=[checkpoint])
At inference time:
k.set_learning_phase(0)
model.load_weights(weights_file)
model.predict_classes()
...
The Keras version is 2.0.8. Is this right, or is some special code needed to compute the BN statistics after training, as when using SegNet in Caffe?
No, you don't need to do anything special when using BatchNormalization or Dropout layers. Keras already tracks the learning/testing phases, so when using predict or predict_classes, it does the right thing.
You do not even need to set the learning phase manually; Keras already does it.
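So the flow above can be reduced to something like the following sketch; the layer stack, the data arrays, and the weights file name are stand-ins:

from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
from keras.callbacks import ModelCheckpoint

# A small stand-in architecture; the real layer stack goes here.
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(16,)))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')

# Training: fit() runs in the training phase, no set_learning_phase needed.
checkpoint = ModelCheckpoint(weights_file)
model.fit(x_train, y_train, epochs=10, callbacks=[checkpoint])

# Inference: predict_classes() runs in the test phase automatically,
# so BatchNormalization uses its moving mean and variance.
model.load_weights(weights_file)
predictions = model.predict_classes(x_test)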