I am replicating a paper. I have a basic Keras CNN model for MNIST classification. Now, for a sample z in the training set, I want to calculate the Hessian matrix of that sample's loss with respect to the model parameters, and then average this Hessian over the training data (n is the number of training samples).
My final goal is to calculate this value (the influence score):
I can calculate the left and right terms; what remains is the Hessian term. I don't know how to calculate the Hessian with respect to the model weights for a batch of examples (vectorized). I was only able to calculate it one sample at a time, which is too slow.
x = tf.convert_to_tensor(x_train[0:13])
with tf.GradientTape() as t2:
    with tf.GradientTape() as t1:
        y = model(x)
        mce = tf.keras.losses.CategoricalCrossentropy()
        y_expanded = y_train[0:13]  # labels for the same 13 samples
        loss = mce(y_expanded, y)
    g = t1.gradient(loss, model.weights[4])
h = t2.jacobian(g, model.weights[4])
print(h.shape)
To clarify: if a model layer has dimension 20x30, I want to feed a batch of 13 samples to it and get a Hessian of shape (13, 20, 30, 20, 30). Right now I can only get a Hessian of shape (20, 30, 20, 30), which defeats the vectorization (the code above).
This thread has the same problem, except that I want the second-order derivative rather than the first-order.
I also tried the script below, which returns a (13, 20, 30, 20, 30) matrix with the right shape. However, when I compared the sum of this matrix against the sum of 13 individual Hessian calculations (a for loop from 0 to 12), the numbers differ even though I expected them to be equal, so it does not work either.
x = tf.convert_to_tensor(x_train[0:13])
mce = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
with tf.GradientTape() as t2:
    with tf.GradientTape() as t1:
        t1.watch(model.weights[4])
        y_expanded = y_train[0:13]
        y = model(x)
        loss = mce(y_expanded, y)
    j1 = t1.jacobian(loss, model.weights[4])
j3 = t2.jacobian(j1, model.weights[4])
print(j3.shape)
That's how Hessians are defined: you can only compute the Hessian of a scalar function.
But there is nothing new here; the same happens with gradients, and what is done to handle batches is to accumulate the gradients. Something similar can be done with the Hessian.
If you know how to compute the Hessian of the loss for a single sample, you can define a batch cost and still compute its Hessian with the same method, e.g. define the cost as sum(losses), where losses is the vector of losses for all examples in the batch.
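For example, here is a minimal sketch of that suggestion, reusing the names from the question (model, x_train, y_train, and the (20, 30) weight model.weights[4] are assumed to exist). Because differentiation is linear, the Hessian of the summed loss equals the sum of the 13 per-sample Hessians, which is exactly the quantity the manual check compares against:

x = tf.convert_to_tensor(x_train[0:13])
y_true = tf.convert_to_tensor(y_train[0:13])
mce = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
w = model.weights[4]  # the (20, 30) weight from the question
with tf.GradientTape() as t2:
    with tf.GradientTape() as t1:
        y_pred = model(x)
        batch_cost = tf.reduce_sum(mce(y_true, y_pred))  # scalar cost for the whole batch
    g = t1.gradient(batch_cost, w)   # shape (20, 30)
h = t2.jacobian(g, w)                # shape (20, 30, 20, 30): sum of the 13 per-sample Hessians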
Suppose you have a model and you want to train its weights using the Hessian of the loss on the training images with respect to the trainable weights.
# Import the libraries we need
import tensorflow as tf
from tensorflow.python.eager import forwardprop

model = tf.keras.models.load_model('model.h5')

# Define the Adam optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.98,
                                     epsilon=1e-9)

# Define the loss function
def loss_function(y_true, y_pred):
    return tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred, from_logits=True)

# Define the accuracy metric function
def accuracy_function(y_true, y_pred):
    return tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)
Now, define the variables for storing the mean of the loss and accuracy
train_loss = tf.keras.metrics.Mean(name='loss')
train_accuracy = tf.keras.metrics.Mean(name='accuracy')
# Compute a Hessian-vector product (forward-over-reverse) instead of the full
# Hessian, which is much more efficient
vector = [tf.ones_like(v) for v in model.trainable_variables]

def _forward_over_back_hvp(images, labels):
    with forwardprop.ForwardAccumulator(model.trainable_variables, vector) as acc:
        with tf.GradientTape() as grad_tape:
            logits = model(images, training=True)
            loss = loss_function(labels, logits)
        grads = grad_tape.gradient(loss, model.trainable_variables)
    hessian = acc.jvp(grads)  # Hessian-vector product
    optimizer.apply_gradients(zip(hessian, model.trainable_variables))
    train_loss(loss)  # keep accumulating the loss
    train_accuracy(accuracy_function(labels, logits))  # keep accumulating the accuracy
# Now call the function and train the model
import time

for epoch in range(20):
    start = time.time()
    train_loss.reset_states()
    train_accuracy.reset_states()
    for i, (x, y) in enumerate(dataset):
        _forward_over_back_hvp(x, y)
        if i % 50 == 0:
            print(f'Epoch {epoch + 1} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}')
    print(f'Time taken for 1 epoch: {time.time() - start:.2f} secs\n')
Epoch 1 Loss 2.6396 Accuracy 0.1250
Time taken for 1 epoch: 0.23 secs
Related
In TF2 Keras, I have trained an autoencoder using tensorflow.keras.losses.MeanSquaredError as the loss function. Now I want to train this model further with another loss function, specifically tensorflow.keras.losses.KLDivergence. The reason is that unsupervised learning is first conducted for representation learning. Then, having the generated embeddings, I can cluster them and use those clusters for self-supervision, i.e. as labels, enabling the second, supervised loss and improving the model further.
This is not transfer learning per se, as no new layers are added to the model, just the loss function is changed and the model continues training.
What I have tried is using the pretrained model with the MSE loss as the new model's property:
class ClusterBooster(tf.keras.Model):

    def __init__(self, base_model, centers):
        super(ClusterBooster, self).__init__()
        self.pretrained = base_model
        self.centers = centers

    def train_step(self, data):
        with tf.GradientTape() as tape:
            loss = self.compiled_loss(self.P, self.Q, regularization_losses=self.losses)
        # Compute gradients
        gradients = tape.gradient(loss, self.trainable_variables)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        return {m.name: m.result() for m in self.metrics}
where the loss is the KL loss between distributions P and Q. The distributions are computed in a callback function instead of the model train_step as I need access to the current epoch (P is updated every 5 epochs, not on each epoch):
def on_epoch_begin(self, epoch, logs=None):
    z = self.model.pretrained.embed(self.feature, training=True)
    z = tf.reshape(z, [tf.shape(z)[0], 1, tf.shape(z)[1]])  # reshape for broadcasting

    # CALCULATE Q FOR EVERY EPOCH
    partial = tf.math.pow(tf.norm(z - self.model.centers, axis=2, ord='euclidean'), 2)
    nominator = 1 / (1 + partial)
    denominator = tf.math.reduce_sum(1 / (1 + partial))
    self.model.Q = nominator / denominator

    # CALCULATE P EVERY 5 EPOCHS TO AVOID INSTABILITY
    if epoch % 5 == 0:
        partial = tf.math.pow(self.model.Q, 2) / tf.math.reduce_sum(self.model.Q, axis=1, keepdims=True)
        nominator = partial
        denominator = tf.math.reduce_sum(partial, axis=0)
        self.model.P = nominator / denominator
However, when apply_gradients() is executed I get:
ValueError: No gradients provided for any variable: ['dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0', 'dense_3/kernel:0', 'dense_3/bias:0']
I think this is because the pretrained model is not set up to be trained further anywhere inside the new model (only the embed() method is called, which does not train the model). Is this a correct approach and am I just missing something, or is there a better way?
It seems that whatever computation takes place in a callback isn't tracked for gradient computation and weight updating. Thus, these computations should be placed inside the train_step() function of the custom Model class (ClusterBooster).
Since I don't have access to the number of epochs inside train_step() of ClusterBooster, I created a custom training loop without a Model class, where I could use plain Python code (which is executed eagerly).
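A rough sketch of such a loop, not the author's exact code: it assumes base_model, centers, features and the embed() method from the question exist, carries over the P/Q formulas from the callback above, and uses n_epochs as a placeholder. The key point is that Q is computed inside the tape so gradients reach the weights, while P is refreshed outside it every 5 epochs.

optimizer = tf.keras.optimizers.Adam()
kl = tf.keras.losses.KLDivergence()

def soft_assignments(model, features, centers):
    z = model.embed(features, training=True)
    z = tf.reshape(z, [tf.shape(z)[0], 1, tf.shape(z)[1]])  # reshape for broadcasting
    partial = tf.math.pow(tf.norm(z - centers, axis=2, ord='euclidean'), 2)
    q = 1 / (1 + partial)
    return q / tf.math.reduce_sum(q)

for epoch in range(n_epochs):  # n_epochs: however many epochs you train for
    if epoch % 5 == 0:
        # refresh the target distribution P every 5 epochs, outside the tape
        Q = soft_assignments(base_model, features, centers)
        partial = tf.math.pow(Q, 2) / tf.math.reduce_sum(Q, axis=1, keepdims=True)
        P = partial / tf.math.reduce_sum(partial, axis=0)
    with tf.GradientTape() as tape:
        # Q must be computed inside the tape so gradients reach the weights
        Q = soft_assignments(base_model, features, centers)
        loss = kl(P, Q)
    grads = tape.gradient(loss, base_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, base_model.trainable_variables))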
I am working on epilepsy seizure prediction. I have an imbalanced dataset and want to compensate for the imbalance by using focal loss. I have 2 classes with one-hot encoded labels. I found the focal loss code below, but I don't know how to get y_pred so it can be used in the focal loss code before model.fit_generator.
y_pred is the output of the model, so how can I use it in the focal loss code before fitting my model?
focal loss code:
from tensorflow.keras import backend as K

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    """
    Implementation of Focal Loss from the paper, for multiclass classification
    Formula:
        loss = -alpha*((1-p)^gamma)*log(p)
    Parameters:
        alpha -- the same as the weighting factor in balanced cross entropy
        gamma -- focusing parameter for the modulating factor (1-p)
    Default values:
        gamma -- 2.0 as mentioned in the paper
        alpha -- 0.25 as mentioned in the paper
    """
    def focal_loss(y_true, y_pred):
        # Define epsilon so that backpropagation will not result in NaN
        # for the 0-divisor case
        epsilon = K.epsilon()
        # Add the epsilon to the prediction value
        # y_pred = y_pred + epsilon
        # Clip the prediction value
        y_pred = K.clip(y_pred, epsilon, 1.0 - epsilon)
        # Calculate cross entropy
        cross_entropy = -y_true * K.log(y_pred)
        # Calculate the weight, which consists of the modulating factor and the weighting factor
        weight = alpha * y_true * K.pow((1 - y_pred), gamma)
        # Calculate focal loss
        loss = weight * cross_entropy
        # Sum the losses in the mini-batch
        loss = K.sum(loss, axis=1)
        return loss

    return focal_loss
My code:
history = model.fit_generator(generate_arrays_for_training(indexPat, train_data, start=0, end=100),
                              validation_data=generate_arrays_for_training(indexPat, test_data, start=0, end=100),
                              steps_per_epoch=int(len(train_data) / 2),
                              validation_steps=int(len(test_data) / 2),
                              verbose=2, epochs=65, max_queue_size=2, shuffle=True)
preictPrediction = model.predict_generator(generate_arrays_for_predict(indexPat, filesPath_data),
                                           max_queue_size=4, steps=len(filesPath_data))
y_pred1 = np.argmax(preictPrediction, axis=1)
y_pred = list(y_pred1)
From the comment section, for the benefit of the community:
This is not specific to focal loss; all Keras loss functions take y_true and y_pred. You do not need to worry about where those parameters come from; they are fed by Keras automatically.
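For instance, a minimal sketch of how the loss factory above would typically be wired in (the optimizer choice here is just a placeholder; the generator arguments are taken from the question). Keras itself calls the returned focal_loss(y_true, y_pred) during training, feeding it the labels from the generator and the model's predictions:

model.compile(optimizer='adam',
              loss=categorical_focal_loss(gamma=2.0, alpha=0.25),
              metrics=['accuracy'])

history = model.fit_generator(generate_arrays_for_training(indexPat, train_data, start=0, end=100),
                              validation_data=generate_arrays_for_training(indexPat, test_data, start=0, end=100),
                              steps_per_epoch=int(len(train_data) / 2),
                              validation_steps=int(len(test_data) / 2),
                              verbose=2, epochs=65, max_queue_size=2, shuffle=True)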
I'm trying to implement a linear classifier in PyTorch, using one layer with tensors W and b, softmax, and cross-entropy loss. For each batch I have to:
Calculate logits
Transform logits to probabilities with softmax
Compute most probable classes
Calculate cross entropy between true and predicted classes
Use an optimizer to change W and b
So far I have the following (flattened MNIST is loaded with scikit-learn):
import torch
import torch.nn.functional as torch_f

# convert NumPy arrays to PyTorch tensors
input_X_train = torch.from_numpy(X_train_flat).float().to(device)
input_X_val = torch.from_numpy(X_val_flat).float().to(device)
input_X_test = torch.from_numpy(X_test_flat).float().to(device)
input_y_train = torch.from_numpy(y_train).long().to(device)
input_y_val = torch.from_numpy(y_val).long().to(device)
input_y_test = torch.from_numpy(y_test).long().to(device)

# model parameters: W and b
W = torch.randn(input_dim, output_dim, device=device, dtype=dtype, requires_grad=True)
b = torch.randn(1, device=device, dtype=dtype, requires_grad=True)

BATCH_SIZE = 512
EPOCHS = 40
LEARNING_RATE = 1e-6

# create a torch.optim.Adam optimizer for loss function minimization
optimizer = torch.optim.Adam([W, b], lr=LEARNING_RATE)

# create a negative log-likelihood loss object for loss evaluation;
# use the mean loss over all batch samples
loss_fn = torch.nn.NLLLoss(reduction="mean")

for t in range(EPOCHS):
    # logits for input_X, resulting shape should be [input_X.shape[0], 10]
    logits = torch.matmul(input_X_train, W) + b
    # apply torch.nn.functional.softmax (torch_f.softmax) to logits
    probas = torch_f.softmax(logits, dim=1)
    # apply torch.argmax to find the class index with the highest probability
    classes = torch.argmax(probas, dim=1)

    # The loss should be a scalar: the average loss over all the objects.
    # PyTorch's NLLLoss expects log-probabilities, so first compute
    # torch.nn.functional.log_softmax (torch_f.log_softmax) on the logits.
    # This is identical to computing cross-entropy (log and then NLL) on top of
    # probas, but is more numerically friendly (read the docs).
    log_probas = torch_f.log_softmax(logits, dim=1)
    loss = loss_fn(log_probas, input_y_train)

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because, by default, gradients are
    # accumulated in buffers (i.e. not overwritten) whenever .backward()
    # is called. Check the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()
    # calculate backward gradients for backpropagation
    loss.backward()
    # calling step() on an optimizer updates its parameters
    optimizer.step()
For some reason, the W and b don't change. What am I doing wrong?
EDIT:
I've seen, and tried to follow in the code above, e.g. this minimal working example: https://discuss.pytorch.org/t/minimal-working-example-of-optim-sgd/11623/2.
EDIT 2:
The gradients W.grad are often zero, and I think it should not be like that. The class probabilities are definitely right (so it's not, e.g., like this example), since I've checked that the probabilities of all classes for each sample sum to 1 in every row.
I would like to perform transfer learning with a pretrained Keras model.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

base_model = keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False, pooling='avg')
x = base_model.outputs[0]
outputs = layers.Dense(10, activation=tf.nn.softmax)(x)
model = keras.Model(inputs=base_model.inputs, outputs=outputs)
Training with the Keras compile/fit functions converges:
model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
history = model.fit(train_data, epochs=1)
The results are: loss: 0.4402 - accuracy: 0.8548
I want to train with tf.GradientTape, but it doesn't converge:
optimizer = keras.optimizers.Adam()
train_loss = keras.metrics.Mean()
train_acc = keras.metrics.SparseCategoricalAccuracy()

def train_step(data, labels):
    with tf.GradientTape() as gt:
        pred = model(data)
        loss = keras.losses.SparseCategoricalCrossentropy()(labels, pred)
    grads = gt.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, pred)

for xs, ys in train_data:
    train_step(xs, ys)
print('train_loss = {:.3f}, train_acc = {:.3f}'.format(train_loss.result(), train_acc.result()))
But the results are: train_loss = 7.576, train_acc = 0.101
If I only train the last layer by setting
base_model.trainable = False
It converges and the results are: train_loss = 0.525, train_acc = 0.823
What's the problem with this code? How should I modify it? Thanks.
Try ReLU as the activation function. It may be a vanishing-gradient issue, which can occur with activation functions other than ReLU.
Following my comment, the reason it didn't converge is that you picked a learning rate that was too big. This causes the weights to change too much and the loss to explode. When setting base_model.trainable to False, most of the weights in the network were fixed, and the learning rate was a good fit for your last layer.
As a general rule, your learning rate should always be tuned for each experiment.
Edit: Following Wilson's comment, I'm not sure this is the reason you have different results, but this could be it:
When you specify your loss, it is computed on each element of the batch; then, to get the loss of the batch, you take either the sum or the mean of the per-element losses. Depending on which one you choose, you get a different magnitude. For example, if your batch size is 64, summing the losses yields a loss 64 times bigger, which yields gradients 64 times bigger, so choosing sum over mean with a batch size of 64 is like picking a learning rate 64 times bigger.
So maybe the reason you get different results is that, by default, a keras.losses object wrapped in model.compile uses a different reduction method. In the same vein, if the loss is reduced with a sum, its magnitude depends on the batch size: with twice the batch size you get (on average) twice the loss and twice the gradient, so it is like doubling the learning rate.
My advice is to check the reduction method used by the loss to be sure it is the same in both cases and, if it is sum, to check that the batch size is the same. In general I would advise using mean reduction, since it is not influenced by the batch size.
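A small sketch of that check, reusing the names from the question (an assumption, not the author's exact code): make the reduction explicit so the compile/fit path and the GradientTape path reduce the loss the same way.

loss_fn = keras.losses.SparseCategoricalCrossentropy(
    reduction=keras.losses.Reduction.SUM_OVER_BATCH_SIZE)  # i.e. mean over the batch

# compile/fit path
model.compile(optimizer=keras.optimizers.Adam(), loss=loss_fn, metrics=['accuracy'])

# GradientTape path, inside train_step
with tf.GradientTape() as gt:
    pred = model(data)
    loss = loss_fn(labels, pred)  # same reduction, same per-batch magnitude
grads = gt.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))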
When reading a TensorFlow implementation of a deep learning model, I am trying to understand the following code segment from the training process.
self.net.gradients_node = tf.gradients(loss, self.variables)

for epoch in range(epochs):
    total_loss = 0
    for step in range((epoch * training_iters), ((epoch + 1) * training_iters)):
        batch_x, batch_y = data_provider(self.batch_size)

        # Run optimization op (backprop)
        _, loss, lr, gradients = sess.run((self.optimizer, self.net.cost, self.learning_rate_node, self.net.gradients_node),
                                          feed_dict={self.net.x: batch_x,
                                                     self.net.y: util.crop_to_shape(batch_y, pred_shape),
                                                     self.net.keep_prob: dropout})

        if avg_gradients is None:
            avg_gradients = [np.zeros_like(gradient) for gradient in gradients]
        for i in range(len(gradients)):
            avg_gradients[i] = (avg_gradients[i] * (1.0 - (1.0 / (step + 1)))) + (gradients[i] / (step + 1))

        norm_gradients = [np.linalg.norm(gradient) for gradient in avg_gradients]
        self.norm_gradients_node.assign(norm_gradients).eval()

        total_loss += loss
I think it is related to mini-batch gradient descent, but I cannot understand how it works, and I have difficulty connecting it to the algorithm shown below.
This is not related to mini-batch SGD.
It computes a running average of the gradients over all steps. After the first step, avg_gradients contains the gradient that was just computed; after the second step, it is the element-wise mean of the gradients from the two steps; after n steps, it is the element-wise mean of all n gradients computed so far. The norm of each averaged gradient is then computed and stored in norm_gradients_node (the code computes the norms; it does not rescale the gradients). It is hard to tell why these average gradients are needed without the context in which they are used.
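For instance, a tiny standalone check (with made-up gradients) showing that the update rule above is just an incremental mean:

import numpy as np

# five fake gradients with the same shape
gradients = [np.random.randn(3, 2) for _ in range(5)]

avg = np.zeros_like(gradients[0])
for step, g in enumerate(gradients):
    # same update rule as in the training loop above
    avg = avg * (1.0 - 1.0 / (step + 1)) + g / (step + 1)

# avg now equals the element-wise mean of all gradients seen so far
np.testing.assert_allclose(avg, np.mean(gradients, axis=0))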