Callback for Keras evaluate_generator - python

I am using Keras for one of my experiments. When I use the fit_generator method I can specify callbacks, so my own logic can run after each batch or epoch.
Now, while using evaluate_generator for validation, I do the following.
One of my metrics looks like this:
from keras import backend as K

def accuracy(y_true, y_pred):
    return K.mean(K.equal(y_true, K.round(y_pred)))
The evaluation:
metrics = model.evaluate_generator(my_generator(...),
                                   steps=steps,
                                   use_multiprocessing=True)
Here, my_generator() yields a single input per step (the batch size is 1). I also have multiple losses defined in the model, and I get all of those losses without any problem.
The problem is that I get only one value per evaluation metric. I think it is the overall metric, computed over all the single batches as if they were one whole input.
How can I define a callback (or anything like that) so that I can do my own calculations on the individual batch evaluations, like a fit_generator callback?
Note: evaluate_generator does not support callbacks.

I think you are looking for a LambdaCallback:
https://keras.io/callbacks/#lambdacallback
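For instance, a LambdaCallback built from plain lambdas can log the loss after every training batch; a minimal sketch, reusing my_generator and steps from the question:

from keras.callbacks import LambdaCallback

# Minimal sketch: print the running loss after every training batch.
batch_logger = LambdaCallback(
    on_batch_end=lambda batch, logs: print(batch, logs.get('loss')))

model.fit_generator(my_generator(...), steps_per_epoch=steps,
                    callbacks=[batch_logger])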
It is also possible to use a completely custom callback.
Example from the Keras documentation:
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
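That said, since evaluate_generator itself does not accept callbacks, another option for the evaluation case is to drive the loop yourself and collect per-batch results with test_on_batch; a rough sketch, assuming my_generator yields (x, y) pairs:

# Rough sketch: evaluate batch by batch instead of via evaluate_generator,
# so the individual batch losses/metrics can be post-processed freely.
gen = my_generator(...)
per_batch_results = []
for _ in range(steps):
    x_batch, y_batch = next(gen)
    per_batch_results.append(model.test_on_batch(x_batch, y_batch))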

Related

TensorFlow Keras: print out and save loss and gradients during model.fit

I'm training neural networks in TensorFlow Keras by using basic code like this:
model.fit(x_train, y_train, epochs=5)
Is there a way to print out and also save the loss value, the gradients, and the norm of the gradients for each epoch of model.fit?
Thanks.
In order to print and save values after each epoch during training, you can use callbacks. You may write your own callback or use the built-in ones.
As an example of a built-in callback, CSVLogger stores each epoch's results in a CSV file.
You can also use ModelCheckpoint to save the weights after each epoch as checkpoints.
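For instance, both built-in callbacks can simply be passed to model.fit (the file names here are just examples):

import tensorflow as tf

# Sketch: log per-epoch results to CSV and checkpoint the weights each epoch.
csv_logger = tf.keras.callbacks.CSVLogger('training_log.csv')
checkpoint = tf.keras.callbacks.ModelCheckpoint('weights_{epoch:02d}.h5',
                                                save_weights_only=True)

model.fit(x_train, y_train, epochs=5, callbacks=[csv_logger, checkpoint])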
If you want to print the gradients after each epoch, you have two possibilities.
Either write a custom training loop and use tf.GradientTape() to record operations for automatic differentiation, then call tape.gradient() to compute the gradients for you (a sketch is given after the callback example below). See this link for more information.
Or, if you want to keep using model.fit(), you should again write a custom callback and print the model's variables, e.g. print(model.trainable_variables).
Here is an example of a custom callback:
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        print(model.trainable_variables)

my_callback = myCallback()
model.fit(x_train, y_train, epochs=5, callbacks=[my_callback])
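For the first option, here is a rough sketch of a custom training loop that records the gradients and prints their global norm each epoch (the loss function and optimizer are placeholders; swap in your own):

import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()  # placeholder loss
optimizer = tf.keras.optimizers.Adam()                     # placeholder optimizer

for epoch in range(5):
    with tf.GradientTape() as tape:
        predictions = model(x_train, training=True)
        loss = loss_fn(y_train, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    # Loss, plus the global norm of all gradients, for this epoch.
    print(epoch, float(loss), float(tf.linalg.global_norm(gradients)))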

How to train deep neural network with custom loss

I am interested in how to train a deep neural network with a custom loss function. I have seen posts about this on Stack Overflow, but they aren't answered. I have downloaded VGG16, frozen its weights, and added my own head. Now I want to train that network with a custom loss; how can I do that?
Here is a custom RMSE loss in PyTorch. I hope this gives you a concrete idea of how to implement a custom loss function: you create a class that inherits from nn.Module and define the initialization and the forward pass.
import torch
import torch.nn as nn

class RMSELoss(nn.Module):
    def __init__(self, eps=1e-9):
        super().__init__()
        self.mse = nn.MSELoss()
        self.eps = eps

    def forward(self, yhat, y):
        loss = torch.sqrt(self.mse(yhat, y) + self.eps)
        return loss
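It is then used like any other PyTorch loss (model, x and y below are placeholders):

criterion = RMSELoss()
loss = criterion(model(x), y)  # yhat = model(x)
loss.backward()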
In Keras you can simply define a function with two input parameters (true value, predicted value) and compute the loss from them however you like.
Here is a code sample:
def custom_loss(y_true, y_pred):
    return tf.losses.mean_squared_error(y_true, y_pred)
I have used the MSE from the TensorFlow backend in this example, but you can do the calculation manually here instead.
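For example, a hand-rolled MSE written with Keras backend ops might look like this (just a sketch of the "manual" option):

from keras import backend as K

def custom_loss(y_true, y_pred):
    # Manual mean squared error over the last axis.
    return K.mean(K.square(y_pred - y_true), axis=-1)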
Compile your model with this loss function.
model.compile(
    optimizer=your_optimizer,
    loss=custom_loss
)
You can also define your own custom metric to monitor during training.
def custom_metric(y_true, y_pred):
    return calculate_your_metric(y_true, y_pred)
Finally, compile with it,
model.compile(
    optimizer=your_optimizer,
    loss=custom_loss,
    metrics=[custom_metric]
)
There are several examples and repositories showing how to implement perceptual loss, which sounds like what you are referring to. Of course, you can generalize and learn from some of these approaches for different models depending on your problem. If you do so, I recommend writing about it and sharing it. I don't see many examples other than ones using some pretrained VGG model, and breaking that mold might be a nice contribution! Anyway, you might find these other answers useful:
Implement perceptual loss with pretrained VGG using keras
VGG, perceptual loss in keras

Universal Tensorflow Wrapper for Model training

I want to build a TensorFlow wrapper to train models. The idea is that you define your model in a function, pass it to the object/wrapper, and it does the rest, so you don't have to code everything from scratch every time. I will make it clear with some pseudocode:
def model():
    # define your tf graph/structure here
    return output
And then you have a class into which you pass your model, training data, and validation data:
class tf_wrapper():
    def __init__(self, model, training_data, valid_data):
        # init stuff
    def train(self):
        # code to train the model
The training code should look like the standard one in many tutorials:
for i in range(epochs):
    sess.run(train_op, feed_dict={placeholder_X: batch_X, placeholder_Y: batch_Y})
What I am struggling with right now is that there are different kinds of model structures, loss functions, input pipelines, and so on. For example, the loss function for a classification task is different from regression (cross-entropy vs. MSE), as is the calculation of accuracy, and the way you feed data into a CNN is different from an RNN. What is the best way to solve this problem?
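One possible direction (a sketch only, assuming the model function returns a tf.keras model) is to make the task-specific pieces constructor parameters, so the wrapper itself stays generic:

import tensorflow as tf

class TFWrapper:
    # Sketch of a generic trainer: model, loss, metrics and optimizer are injected.
    def __init__(self, model_fn, loss_fn, metrics, optimizer):
        self.model = model_fn()      # build the model/graph
        self.loss_fn = loss_fn       # e.g. cross-entropy or MSE
        self.metrics = metrics       # e.g. [accuracy] or [mae]
        self.optimizer = optimizer

    def train(self, training_data, valid_data, epochs):
        self.model.compile(optimizer=self.optimizer,
                           loss=self.loss_fn,
                           metrics=self.metrics)
        return self.model.fit(training_data,
                              validation_data=valid_data,
                              epochs=epochs)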

How to write callbacks to get predictions from fit_generator() in Keras?

There is no doubt that we can define our own callbacks to get the loss of each batch from fit_generator in Keras, as follows:
class LossHistory(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
I wonder if there is a convenient way to get the predictions for each sample in a batch, similar to the way we get the loss of each batch.
I have searched for a solution for a long time with no luck; thanks so much for your help!
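A rough sketch of one possible workaround: Keras callbacks do not expose the inputs of the current batch, so pass the data you care about to the callback yourself and call self.model.predict on it in on_batch_end (x_data below is whatever inputs you want predictions for):

class BatchPredictionHistory(Callback):
    # Sketch: store predictions on a fixed set of inputs after every batch.
    # Note: running predict this often can slow training down considerably.
    def __init__(self, x_data):
        self.x_data = x_data
        self.batch_predictions = []

    def on_batch_end(self, batch, logs={}):
        self.batch_predictions.append(self.model.predict(self.x_data))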

Python/Keras - Creating a callback with one prediction for each epoch

I'm using Keras to predict a time series. As standard, I'm using 20 epochs. I want to know what my neural network predicted after each one of the 20 epochs.
Using model.predict I'm getting only one prediction among all epochs (not sure how Keras selects it). I want all the predictions, or at least the 10 best.
According to a previous answer I got, I should compute the predictions after each training epoch by implementing an appropriate callback by subclassing Callback() and calling predict on the model inside the on_epoch_end function.
Well, the theory seems in shape but I'm in trouble to code that. Would anyone be able to give a code example on that?
I'm not sure how to implement the Callback() subclassing, nor how to combine that with model.predict inside an on_epoch_end function.
Your help will be highly appreciated :)
EDIT
Well, I've made a little progress.
I found out how to create the subclass and how to link it to model.predict.
However, I'm still stuck on how to create a list with all the predictions. Below is my current code:
#Creating a Callback subclass that stores each epoch prediction
class prediction_history(Callback):
    def on_epoch_end(self, epoch, logs={}):
        self.predhis=(model.predict(predictor_train))

#Calling the subclass
predictions=prediction_history()

#Executing the model.fit of the neural network
model.fit(X=predictor_train, y=target_train, nb_epoch=2, batch_size=batch, validation_split=0.1, callbacks=[predictions])

#Printing the prediction history
print predictions.predhis
However, all I'm getting is the predictions of the last epoch (the same effect as simply printing model.predict(predictor_train)).
The question now is: how do I adapt my code so that predhis accumulates the predictions of each epoch?
You are overwriting the prediction on each epoch; that is why it doesn't work. I would do it like this:
class prediction_history(Callback):
    def __init__(self):
        self.predhis = []

    def on_epoch_end(self, epoch, logs={}):
        self.predhis.append(model.predict(predictor_train))
This way self.predhis is now a list and each prediction is appended to the list at the end of each epoch.
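As a side note, inside a callback you can also use self.model instead of the global model reference, since Keras attaches the model to the callback before training starts; that makes the callback easier to reuse across models.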
