How to remove (pop) initial layers of Keras InceptionV3 pre-trained model? - python

I am trying to use the pre-trained InceptionV3 model. However, I want to remove the first five layers and add my own custom layers. How can I do that? I tried model.layers.pop(0), but that alone does not solve the problem.
Edit:
tf.keras does not help either as mentioned in the first answer:

model.layers.pop() doesn't work the same way in tf.keras as it does in Keras. In tf.keras, model.layers is a view of the model, so you can't remove layers from it; what you can do instead is pick the layer whose output you want and build a new model from there. For example,
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

base_model = InceptionV3(input_shape=(299, 299, 3), weights="imagenet", include_top=True)
# you don't want the last five layers:
base_model_output = base_model.layers[-6].output
# new layers
outputs = Dense(....)(base_model_output)
model = Model(base_model.input, outputs)

Since the first few layers starting from the input are changed, the pretrained weights for those layers cannot be reused. So, instead of attempting complex model surgery, take the architecture directly from the source and modify it accordingly:
https://github.com/keras-team/keras-applications/blob/master/keras_applications/inception_v3.py
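For the layers that are left unchanged, the ImageNet weights can still be carried over by matching layer names. A minimal sketch, assuming modified_inception_v3() is a hypothetical function you wrote by editing the source linked above, and that the unchanged layers kept their original names and shapes:

import tensorflow as tf

pretrained = tf.keras.applications.InceptionV3(weights="imagenet", include_top=True)
custom = modified_inception_v3()  # hypothetical: returns your edited architecture

# copy weights for every kept layer whose name still exists in the original model
pretrained_by_name = {layer.name: layer for layer in pretrained.layers}
for layer in custom.layers:
    source = pretrained_by_name.get(layer.name)
    if source is not None and layer.weights:
        layer.set_weights(source.get_weights())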

Related

Transfer learning or fine-tuning

import tensorflow as tf

def build_model():
    # pre-trained base (InceptionV3 assumed here), without its classification head
    base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
    x = base_model.output
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    prediction_layer = tf.keras.layers.Dense(num_classes, activation='softmax')(x)  # num_classes: your classes
    model = tf.keras.Model(inputs=base_model.input, outputs=prediction_layer)
    return model
When you download the base model, it comes with both its architecture and its weights. Training these large models from scratch is usually not practical on a single PC. That's why we download pre-trained models from the internet and then fine-tune them by retraining on our own dataset.
# Fine-tune from this layer onwards
for layer in model.layers[:fine_tune_at]:
    layer.trainable = False
for layer in model.layers[fine_tune_at:]:
    layer.trainable = True
As you can see above, you freeze the weights of the pre-trained model's initial layers, since you don't want to disturb them, and only retrain the later layers and the output layer to match your own inputs and outputs.
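One detail worth adding: changing trainable only takes effect once the model is compiled again, so recompile after toggling the flags. A minimal sketch, reusing the tf import from above; the optimizer, learning rate, loss, and the train_dataset/val_dataset names are placeholders, with a small learning rate being typical for fine-tuning:

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # small LR for fine-tuning
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_dataset, validation_data=val_dataset, epochs=10)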

How to load the weights of the first few layers of Convolutional Neural Network in Keras and delete the pre-trained model?

I have a pretrained model trained in Keras.
I am trying to use that model for another task, but I don't need all the layers, only the first 4 conv layers.
I have the model saved in "keras_pretrained_model.h5"
Is it possible to initialize the first 4 conv layers of the new model using the weights of the first 4 conv layers of the pretrained model from the '.h5' file?
Is loading the whole pretrained model first always necessary?
The pretrained model actually takes up a lot of space, and I am not sure how to delete it after initializing the new model with its weights. As far as I understand, using tf.keras.backend.clear_session() will clear the new model along with the old one.
So, my questions are:
Is there any way to initialize the weights in the new model layers without loading the whole pretrained model?
If I have to load the whole pretrained model, how to delete only the pretrained model without harming the new model in any way?
I have thought of two approaches:
If the names of the first 4 layers of the new model are the same as those of the corresponding 4 layers in the pre-trained model, then
new_model.load_weights(path_to_old_model_file, by_name = True)
If the names don't match, we can do layer-wise weight initialization by taking the weights of the corresponding layers from the old model's h5 file and setting them with the set_weights() method (a sketch of this is given below).
I have written some code, which I have uploaded to GitHub, here.
I would be very grateful if anyone seeing this could give feedback on it!
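A minimal sketch of the second approach, assuming the .h5 file was written by model.save() (so the weights sit under the 'model_weights' group) and that the conv layer names used below are placeholders for the real ones:

import h5py

def load_layer_weights(h5_path, layer_name):
    # read one layer's weight arrays straight from the HDF5 file,
    # without instantiating the whole pretrained model
    with h5py.File(h5_path, 'r') as f:
        g = f['model_weights'] if 'model_weights' in f else f
        layer_group = g[layer_name]
        names = [n.decode() if isinstance(n, bytes) else n
                 for n in layer_group.attrs['weight_names']]
        return [layer_group[name][...] for name in names]

# copy into the matching layers of the new model
for name in ['conv1', 'conv2', 'conv3', 'conv4']:  # placeholder layer names
    new_model.get_layer(name).set_weights(
        load_layer_weights('keras_pretrained_model.h5', name))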

How to use Deep Learning Models from Keras for a problem that does not fit imagenet dataset?

I followed a blog on how to implement a VGG16 model from scratch and want to do the same with the pretrained model from Keras. I looked up some other blogs but can't find a fitting solution, I think. My task is to classify integrated circuit images as defective or non-defective.
I have seen in a paper that they used a VGG16 model pretrained on ImageNet for fabric defect detection, where they froze the first seven layers and fine-tuned the last nine for their own problem.
(Source: https://journals.sagepub.com/doi/full/10.1177/1558925019897396)
I have already seen examples of how to freeze all layers except the fully connected ones, but how can I freeze the first x layers and fine-tune the others for my problem?
VGG16 is fairly easy to implement from scratch, but for models like ResNet or Xception it gets a little trickier.
It is not necessary to implement a model from scratch to freeze a few layers. You can do this on pre-trained models as well. In Keras, you'd use trainable = False.
For example, let's say you want to use the pre-trained Xception model from keras and want to freeze the first x layers:
# In your imports
from keras.applications import Xception

# Since you're using the model for a different task, you'd want to remove the top
base_model = Xception(weights='imagenet', include_top=False)

# Freeze layers 0 to x
for layer in base_model.layers[0:x]:
    layer.trainable = False

# To see all the layers in detail and to check trainable parameters
base_model.summary()
Ideally you'd want to add another layer on top of this model with the output as your classes. For more details, you can check this keras guide: https://keras.io/guides/transfer_learning/
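For completeness, a minimal sketch of such a head on top of the frozen base defined above (num_classes is a placeholder for your own number of classes):

from keras import layers, models

x = layers.GlobalAveragePooling2D()(base_model.output)
outputs = layers.Dense(num_classes, activation='softmax')(x)

model = models.Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()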
A lot of times the pre-trained weights are very useful in other classification tasks, but if you want to train a model from scratch on your dataset, you can load the model without the ImageNet weights. Or, better, load the weights but don't freeze any layers: this will retrain every layer, using the ImageNet weights as initialization.
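A minimal sketch of those two variants, reusing the Xception import from the snippet above:

# train entirely from scratch: random initialization, no ImageNet weights
scratch_model = Xception(weights=None, include_top=False)

# or: start from the ImageNet weights but leave every layer trainable
warm_start = Xception(weights='imagenet', include_top=False)
for layer in warm_start.layers:
    layer.trainable = True  # already the default, shown here only for clarity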
I hope I've answered your question.

How do I add layers at the start of a model in Keras?

I want to add new layers to a pre-trained model, using Tensorflow and Keras. The problem is, those new layers are not to be added on top of the model, but at the start. I want to create a triple-siamese model, which takes 3 different inputs and gives 3 different outputs, using the pre-trained network as the core of the model. For that, I need to insert 3 new input layers at the beginning of the model.
The default path would be to just chain the layers and the model, but this method treats the pre-trained model as a single layer: when a new model with the new inputs and the pre-trained model is created, the new model only contains 4 layers, the 3 input layers plus the whole pre-trained model:
import tensorflow as tf

input_1 = tf.keras.layers.Input(shape=(224, 224, 3))
input_2 = tf.keras.layers.Input(shape=(224, 224, 3))
input_3 = tf.keras.layers.Input(shape=(224, 224, 3))

output_1 = pre_trained_model(input_1)
output_2 = pre_trained_model(input_2)
output_3 = pre_trained_model(input_3)

new_model = tf.keras.Model([input_1, input_2, input_3], [output_1, output_2, output_3])
new_model has only 4 layers, due to the Keras API considering the pre_trained_model a layer.
I know that the above option works, as I have seen it in many code samples, but I wonder if there is a better option. It feels awkward to me, because access to the inner layers of the final model becomes indirect, not to mention that the model keeps the pre-trained model's own input layer even though it is no longer needed after the 3 new input layers are added.
No, this does not add layers; you are making a multi-input, multi-output model in which each siamese branch shares weights. There is no other API in Keras to do this, so this is your only option.
And you can always access the layers of the inner model through the pre_trained_model variable, so nothing is lost.
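For example, the inner layers remain reachable either through that variable or through new_model.get_layer() (the layer index below is only for illustration):

inner = new_model.get_layer(pre_trained_model.name)  # the shared sub-model
print(inner.layers[3].name)                          # inner layers are still accessible
assert inner is pre_trained_model                    # it is literally the same object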

Keras: Is there any way to "pop()" the top layers?

In Keras there is a feature called pop() that lets you remove the bottom layer of a model. Is there any way to remove the top layer of a model?
I have a fully saved pre-trained Variational Autoencoder and am trying to only load the decoder (the bottom four layers).
I am using Keras with a Tensorflow backend.
Keras pop() removes the last (aka top) layer, not the bottom one.
I suggest you use model.summary() to print out the list of layers and then use pop() repeatedly until only the necessary layers are left.
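Alternatively, instead of popping repeatedly, you can rebuild just the decoder by feeding a new Input through the last four layers. A minimal sketch, assuming those four layers form a simple chain and that the saved file name below is a placeholder:

from keras import layers, models

vae = models.load_model("vae_pretrained.h5")  # placeholder filename

decoder_input = layers.Input(shape=vae.layers[-4].input_shape[1:])
x = decoder_input
for layer in vae.layers[-4:]:
    x = layer(x)

decoder = models.Model(decoder_input, x)
decoder.summary()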
pop(0) works for me
from keras.applications import vgg16
vgg = vgg16.VGG16(include_top=False, input_shape=(604,604,3))
vgg.summary()
vgg.layers.pop(0)
vgg.summary()
vgg.layers.pop()
vgg.summary()
