In Keras there is a feature called pop() that lets you remove the bottom layer of a model. Is there any way to remove the top layer of a model?
I have a fully saved pre-trained Variational Autoencoder and am trying to only load the decoder (the bottom four layers).
I am using Keras with a TensorFlow backend.
Keras pop() removes the last (aka top) layer, not the bottom one.
I suggest you use model.summary() to print out the list of layers and then call pop() repeatedly until only the necessary layers are left.
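For example, here is a minimal sketch of that workflow, assuming the saved model is a Sequential model (the file name and the number of layers to keep are placeholders):
from keras.models import load_model
model = load_model('my_model.h5')   # hypothetical saved model file
model.summary()                     # inspect the layer list first
while len(model.layers) > 4:        # keep only the layers you need
    model.pop()                     # pop() removes the last (top) layer of a Sequential model
model.summary()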
pop(0) works for me
from keras.applications import vgg16
vgg = vgg16.VGG16(include_top=False, input_shape=(604, 604, 3))
vgg.summary()
vgg.layers.pop(0)   # drops the first (bottom) layer from the layer list
vgg.summary()
vgg.layers.pop()    # drops the last (top) layer
vgg.summary()
I followed a blog on how to implement a VGG16 model from scratch and want to do the same with the pre-trained model from Keras. I looked up some other blogs but could not find a fitting solution. My task is to classify integrated circuit images as defective or non-defective.
I have seen a paper that used a VGG16 model pre-trained on ImageNet for fabric defect detection, where they froze the first seven layers and fine-tuned the last nine for their own problem.
(Source: https://journals.sagepub.com/doi/full/10.1177/1558925019897396)
I have already seen examples of how to freeze all layers except the fully connected ones, but how can I freeze the first x layers and fine-tune the rest for my problem?
VGG16 is fairly easy to implement from scratch, but for models like ResNet or Xception it gets a little trickier.
It is not necessary to implement a model from scratch just to freeze a few layers. You can do this on pre-trained models as well. In Keras, you'd set trainable = False.
For example, let's say you want to use the pre-trained Xception model from Keras and freeze the first x layers:
# In your imports
from keras.applications import Xception

# Since you're using the model for a different task, remove the top
base_model = Xception(weights='imagenet', include_top=False)

# Freeze layers 0 to x
for layer in base_model.layers[0:x]:
    layer.trainable = False

# To see all the layers in detail and to check the trainable parameters
base_model.summary()
Ideally you'd want to add a new classification head on top of this model, with the output sized to your number of classes. For more details, you can check this Keras guide: https://keras.io/guides/transfer_learning/
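For instance, a minimal sketch of such a head (num_classes, the pooling layer and the optimizer/loss choices are assumptions, not part of the original answer):
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

num_classes = 2   # e.g. defective vs. non-defective; adjust to your problem
x = GlobalAveragePooling2D()(base_model.output)
outputs = Dense(num_classes, activation='softmax')(x)
model = Model(base_model.input, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])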
A lot of the time the pre-trained weights are very useful in other classification tasks, but if you want to train a model from scratch on your dataset, you can load the model without the ImageNet weights. Or better, load the weights but don't freeze any layers. This will retrain every layer, using the ImageNet weights as an initialization.
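As a rough sketch of those two options (both assume you add your own classification head afterwards):
from keras.applications import Xception

scratch_model = Xception(weights=None, include_top=False)          # random initialization, train everything from scratch
warmstart_model = Xception(weights='imagenet', include_top=False)  # ImageNet initialization, all layers left trainable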
I hope I've answered your question.
I am creating a model somewhat similar to the one mentioned below:
[model diagram]
I am using Keras to create such a model, but I have hit a dead end as I have not been able to find a way to add SoftMax to the outputs of the LSTM units. So far, all the tutorials and helping material I have found only cover outputting a single class, as in the image captioning case provided in this link.
So is it possible to apply SoftMax to every unit of the LSTM (where return_sequences is true), or do I have to move to PyTorch?
The answer is: yes, it is possible to apply a softmax to each timestep of the LSTM output, and no, you do not have to move to PyTorch.
While in Keras 1.X you needed to explicitly add a TimeDistributed layer, in Keras 2.X you can just write:
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(Dense(number_of_classes, activation='softmax'))
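Here is a self-contained sketch (the timesteps, features and number_of_classes values are placeholders):
from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, features, number_of_classes = 20, 8, 5   # placeholder values
model = Sequential()
model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(timesteps, features)))
# Dense applied to a 3D tensor acts on the last axis, i.e. independently per timestep
model.add(Dense(number_of_classes, activation='softmax'))
model.summary()   # output shape: (None, timesteps, number_of_classes)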
I am trying to use the pre-trained InceptionV3 model. However, I want to remove the initial five layers and add my own custom layers. How can I do that? I tried model.layers.pop(0), but that alone will not solve the problem.
Edit:
tf.keras does not help either as mentioned in the first answer:
model.layers.pop() doesn't work in the same way in tf.keras as it does in standalone Keras. In tf.keras, model.layers is a view of the model's layers: you can't remove layers through it, but what you can do is pick the layer whose output you want and build a new model from it. For example,
# shape is a placeholder, e.g. (299, 299, 3) for the ImageNet weights
base_model = InceptionV3(input_shape=shape, weights="imagenet", include_top=True)
# you don't want the last five layers:
base_model_output = base_model.layers[-6].output
# new layers
outputs = Dense(....)(base_model_output)
model = Model(base_model.input, outputs)
Since the first few layers starting from the input are being changed, the pre-trained weights for those layers cannot be reused. So the architecture can be taken directly from the source below and modified accordingly, instead of attempting complex model surgery.
https://github.com/keras-team/keras-applications/blob/master/keras_applications/inception_v3.py
I am working on a project that requires me to add new units to the output layer of a neural network to implement a form of transfer learning. I was wondering if I could do this and set the units' weights using either Keras or TensorFlow.
Specifically I would like to append an output neuron to the output layer of the Keras model and set that neuron's initial weights and bias.
Stumbled upon the answer to my own question. Thanks everyone for the answers/comments.
https://keras.io/layers/about-keras-layers/
The first few lines of this source detail how to load and set weights.
Essentially, appending an output neuron to a Keras model can be accomplished by loading the old output layer's weights, appending the weights for the new neuron, and setting the combined weights on a new output layer. Code is below.
import numpy as np
from keras.layers import Dense

# Load the weights of the previous output layer (a Dense layer stores [kernel, bias])
old_kernel, old_bias = model.layers[-1].get_weights()
model.pop()   # remove the old output layer from the Sequential model
# Weights for the new neuron: one extra kernel column and one extra bias value
new_neuron_kernel = np.zeros((bottleneck_size, 1))   # set these to the initial weights you want
new_neuron_bias = np.zeros((1,))
# Add the new, wider output layer (num_classes here includes the appended neuron)
model.add(Dense(num_classes))
# Set its weights to the old weights with the new neuron's weights appended
model.layers[-1].set_weights([np.append(old_kernel, new_neuron_kernel, axis=1),
                              np.append(old_bias, new_neuron_bias)])
You can add new units to the output layer of a pre-trained neural network. This form of transfer learning is often described as using the bottleneck features of a pre-trained network. It can be implemented in both TensorFlow and Keras.
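For instance, a minimal Keras sketch of extracting bottleneck features (the VGG16 base, the input shape and the stand-in images array are assumptions):
import numpy as np
from keras.applications import VGG16

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
images = np.random.rand(8, 150, 150, 3)            # stand-in for your preprocessed images
bottleneck_features = base_model.predict(images)   # features to train a small classifier on
np.save('bottleneck_features.npy', bottleneck_features)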
Please find the tutorial in Keras below:
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
Also, find the tutorial for TensorFlow below:
https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/08_Transfer_Learning.ipynb
Hope this helps!
I'm using Keras for my modelling work and I wonder whether it is possible to remove certain layers by index or name. Currently I only know that model.pop() can do this, but it just removes the most recently added layer. In addition, the layers are TensorVariable-backed objects and I have no idea how to remove a certain element the way I could in a numpy array or list. BTW, I'm using the Theano backend.
It is correct that model.pop() just removes the last added layer and there is no other documented way to delete intermediate layers.
You can always get the output of any intermediate layer like so:
from keras.applications import VGG19
from keras.models import Model
base_model = VGG19(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)
Example taken from here: https://keras.io/applications/
Then add your new layers on top of that.
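For example (a hypothetical new head; the layer sizes and the 10-class output are assumptions):
from keras.layers import Flatten, Dense

x = Flatten()(model.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
new_model = Model(inputs=model.input, outputs=predictions)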