ELMo - How to train trainable parameters - python

I am new to tensorflow-hub and came across the ELMo model (https://www.tensorflow.org/hub/modules/google/elmo/2).
According to the original paper, the ELMo representation is a weighted average of hidden-state activations, and these weights are trainable for the task at hand, i.e., task-specific. As expected, I can see the 4 trainable parameters when I use tf.trainable_variables(). How exactly do I train these variables in tensorflow?
The documentation just mentions that these weights are trainable, but who trains them: me, or the ELMo model itself? The paper seems to suggest that I should be training them. If so, how do I do that in tensorflow?

You can start off by importing the module into your model with trainable=True, then train the model as you would any other TF model. In the process of training, the weights imported as part of the module will be trained as well. You can also use this tutorial as a good starting point, replacing the nnlm embedding with ELMo.
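As a hedged illustration of that approach (TF 1.x graph mode; the mean-pooling head, two-class output, and toy data are assumptions made for the sketch, not part of the original answer):
import tensorflow as tf
import tensorflow_hub as hub

# Load ELMo with trainable=True so the scalar mixing weights show up in
# tf.trainable_variables() and receive gradients.
elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)

sentences = tf.placeholder(tf.string, shape=[None])
labels = tf.placeholder(tf.int32, shape=[None])

# [batch, time, 1024] contextual embeddings; mean-pool over time as a
# simple (assumed) sentence representation.
embeddings = elmo(sentences, signature="default", as_dict=True)["elmo"]
pooled = tf.reduce_mean(embeddings, axis=1)
logits = tf.layers.dense(pooled, 2)  # 2 classes, purely for illustration

loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
# Minimizing the task loss updates the mixing weights along with the rest
# of the model's variables.
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    sess.run(train_op, feed_dict={sentences: ["a good movie", "a bad movie"],
                                  labels: [1, 0]})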

Related

duplicated weights in trained tensorflow model

python 3.7, tensorflow 2.9.1
After training a deep learning model and running
model.weights
there are duplicated weights for the embedding vectors.
Printing its summary and comparing it to a new model (same architecture) shows the same number of parameters in each layer; however, the totals for total params and trainable params at the bottom are different.
Sadly, I cannot share the model architecture.
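For anyone trying to reproduce this kind of check, a toy sketch of the inspection steps described above (the architecture below is hypothetical and unrelated to the asker's model):
import tensorflow as tf

# Hypothetical toy model with an embedding layer, used only to show the
# inspection steps: list the weight tensors, then print the summary.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=100, output_dim=8),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.build(input_shape=(None, 10))

# List every weight tensor; duplicated entries would show up here.
for w in model.weights:
    print(w.name, w.shape)

# The per-layer and total parameter counts at the bottom of the summary
# are derived from these same variables.
model.summary()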

How to use Deep Learning Models from Keras for a problem that does not fit imagenet dataset?

I followed a blog on how to implement a VGG16 model from scratch and want to do the same with the pretrained model from Keras. I looked up some other blogs but couldn't find a fitting solution. My task is to classify integrated circuit images into defect or non-defect.
I have seen a paper that used a pretrained ImageNet VGG16 model for fabric defect detection, where they froze the first seven layers and fine-tuned the last nine for their own problem.
(Source: https://journals.sagepub.com/doi/full/10.1177/1558925019897396)
I have already seen examples of how to freeze all layers except the fully connected ones, but how can I freeze the first x layers and fine-tune the rest for my problem?
VGG16 is fairly easy to implement from scratch, but for models like ResNet or Xception it gets a little trickier.
It is not necessary to implement a model from scratch in order to freeze a few layers; you can do this on pre-trained models as well. In Keras, you'd set trainable = False.
For example, say you want to use the pre-trained Xception model from Keras and freeze the first x layers:
# In your imports
from keras.applications import Xception

# Since you're using the model for a different task, remove the top (classifier) layers
base_model = Xception(weights='imagenet', include_top=False)

# Freeze layers 0 to x (set x to the number of layers you want frozen)
for layer in base_model.layers[0:x]:
    layer.trainable = False

# To see all the layers in detail and to check trainable parameters
base_model.summary()
Ideally you'd then add another layer on top of this model with your own classes as the output. For more details, you can check the Keras transfer-learning guide: https://keras.io/guides/transfer_learning/
A lot of the time the pre-trained weights are very useful for other classification tasks, but if you want to train a model from scratch on your dataset, you can load the model without the ImageNet weights. Or, better, load the weights but don't freeze any layers; this will retrain every layer, using the ImageNet weights as initialization.
I hope I've answered your question.
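For what it's worth, a minimal sketch of such a head, continuing from the base_model snippet above (the pooling choice and the two-class defect / non-defect output are assumptions):
from keras import layers, models

# Hypothetical head on top of the frozen base: global pooling plus a
# softmax over 2 classes (defect / non-defect).
x_out = layers.GlobalAveragePooling2D()(base_model.output)
x_out = layers.Dense(2, activation='softmax')(x_out)
model = models.Model(inputs=base_model.input, outputs=x_out)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])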

Updating pre-trained Deep Learning model with respect to new data points

Considering the example of image classification on ImageNet, how can I update a pre-trained model using new data points?
I have loaded the pre-trained model. I have new data points that are quite different from the distribution of the original data the model was trained on, so I would like to update/fine-tune the model with them. How do I go about doing it? I am using pytorch 0.4.0 for the implementation, running on a Tesla K40C GPU.
If you don't want to change the output of the classifier (i.e. the number of classes), then you can simply continue training the model with new example images, assuming that they are reshaped to the same shape that the pretrained model accepts.
On the other hand, if you want to change the number of classes in a pre-trained model, then you can replace the last fully connected layer with a new one and train only that specific layer on the new samples. Here's sample code for this case, from PyTorch's autograd mechanics notes:
import torch.nn as nn
import torch.optim as optim
import torchvision

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the last fully-connected layer.
# Parameters of newly constructed modules have requires_grad=True by default.
model.fc = nn.Linear(512, 100)

# Optimize only the classifier
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
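Continuing from the snippet above, a hedged sketch of a single fine-tuning step on new samples (the random tensors are stand-ins for your actual data):
import torch

criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 224, 224)   # stand-in batch of new images
targets = torch.randint(0, 100, (8,))  # stand-in labels for the 100 classes

optimizer.zero_grad()
loss = criterion(model(images), targets)  # only model.fc receives gradients
loss.backward()
optimizer.step()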

Usage of 'learning_phase' in keras for tensorflow backend?

I am trying to train a resnet network using keras backend in tensorflow. The feed dictionary for each batch update is written as:
feed_dict= {x:X_train[indices[start:end]], y:Y_train[indices[start:end]], keras.backend.learning_phase():1}
I am using the keras backend (keras.backend.set_session(sess)) because the original resnet network is defined with keras. As the model contains dropout and batch_norm layers, it requires a learning phase to distinguish between training and testing.
I observe that whenever I set keras.backend.learning_phase():1, the model's train/test accuracy hardly increases above 10%. In contrast, if the learning phase is not set, i.e., the feed dictionary is defined as:
feed_dict= {x:X_train[indices[start:end]], y:Y_train[indices[start:end]]}
then, as expected, the model accuracy keeps increasing with epochs in the standard way.
I would appreciate it if someone could clarify whether the learning phase is unnecessary or whether something else is wrong. The Keras 2.0 documentation seems to suggest using the learning phase with dropout and batch_norm layers.
Set the learning phase to 1 (training):
K.set_learning_phase(1)
Then you need to set training=False for all batch normalization layers:
for layer in model.layers:
    if layer.name.startswith('bn'):
        layer.call(layer.input, training=False)
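Putting it together, a hedged sketch of the usual per-batch convention (sess, train_op, accuracy, and the batch arrays are assumed to come from the surrounding training script): feed 1 for training updates and 0 for evaluation, so dropout and batch norm switch modes consistently.
import keras.backend as K

# Training update: dropout active, batch norm uses batch statistics.
sess.run(train_op, feed_dict={x: x_batch, y: y_batch, K.learning_phase(): 1})

# Evaluation: dropout off, batch norm uses its moving statistics.
acc = sess.run(accuracy, feed_dict={x: x_test, y: y_test, K.learning_phase(): 0})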

Pytorch Pre-trained RESNET18 Model

I have trained a pre-trained RESNET18 model in pytorch and saved it. While testing, the model gives different accuracy for different mini-batch sizes. Does anyone know why?
Yes, I think so.
RESNET contains batch normalisation layers. At evaluation time you need to fix these; otherwise the running means are continuously adjusted after each batch, hence giving you a different accuracy for each mini-batch size.
Try setting:
model.eval()
before evaluation. Note that before getting back to training, you should call model.train().
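A minimal sketch of the toggle (the random tensor is a stand-in for a real test batch):
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)

# eval() puts batch-norm layers into inference mode: they use the stored
# running statistics instead of per-batch statistics, so the output no
# longer depends on the mini-batch size.
model.eval()
with torch.no_grad():
    batch = torch.randn(4, 3, 224, 224)  # stand-in test batch
    outputs = model(batch)

model.train()  # switch back before resuming training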
