Universal TensorFlow Wrapper for Model Training - Python

I want to build a TensorFlow wrapper for training models. The idea is that you define your model in a function, pass it to the wrapper object, and the wrapper does the rest, so you don't have to code everything from scratch every time. Some pseudocode to make it clear:
def model():
    # define your tf graph/structure here
    return output
Then you will have a class to which you pass your model, training data, and validation data:
class tf_wrapper():
    def __init__(self, model, training_data, valid_data):
        # init stuff
    def train(self):
        # code to train the model
The training code should look like the standard loop found in many tutorials:
for i in range(epochs):
    sess.run(train_op, feed_dict={placeholder_X: batch_X, placeholder_Y: batch_Y})
What I struggle with right now is that there are different kinds of model structures, loss functions, input pipelines ... for example, the loss function for a classification task is different from that for regression (cross entropy vs. MSE), as is the calculation of accuracy, and the way you feed data into a CNN differs from an RNN. What is the best way to solve this problem?
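One common design, sketched below, is to inject the parts that vary (loss, metric, input pipeline) as callables, so the wrapper owns only the generic training loop. This is a minimal TF1-style sketch under the question's session/placeholder assumptions; the names TFWrapper, model_fn, loss_fn, and metric_fn are hypothetical, not an established API.

import tensorflow as tf

class TFWrapper:
    def __init__(self, model_fn, loss_fn, metric_fn, training_data, valid_data):
        self.model_fn = model_fn      # builds the graph and returns predictions
        self.loss_fn = loss_fn        # e.g. cross entropy or MSE, injected per task
        self.metric_fn = metric_fn    # e.g. accuracy or RMSE, injected per task
        self.training_data = training_data
        self.valid_data = valid_data

    def train(self, epochs, learning_rate=1e-3):
        x, y = self.training_data
        X = tf.placeholder(tf.float32, shape=(None,) + x.shape[1:])
        Y = tf.placeholder(tf.float32, shape=(None,) + y.shape[1:])
        pred = self.model_fn(X)
        loss = self.loss_fn(Y, pred)
        metric = self.metric_fn(Y, pred)
        train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for _ in range(epochs):
                # full-batch updates for brevity; a real pipeline would mini-batch
                _, l, m = sess.run([train_op, loss, metric],
                                   feed_dict={X: x, Y: y})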

Related

How to train a deep neural network with a custom loss

I am interested in how to train a deep neural network with a custom loss function. I have seen posts on Stack Overflow, but they aren't answered. I have downloaded VGG16, frozen its weights, and added my own head. Now I want to train that network with a custom loss; how can I do that?
Here is a custom RMSE loss in PyTorch. I hope this gives you a concrete idea of how to implement a custom loss function: you create a class that inherits from nn.Module and define the initialization and forward pass.
import torch
import torch.nn as nn

class RMSELoss(nn.Module):
    def __init__(self, eps=1e-9):
        super().__init__()
        self.mse = nn.MSELoss()
        self.eps = eps

    def forward(self, yhat, y):
        # eps keeps sqrt differentiable when the MSE is exactly zero
        loss = torch.sqrt(self.mse(yhat, y) + self.eps)
        return loss
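A usage sketch, assuming yhat is the model output and y the target tensor:

criterion = RMSELoss()
loss = criterion(yhat, y)
loss.backward()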
In Keras, you can simply define a function with two input parameters (true values, predicted values) and then calculate the loss from those values with your very own method.
Here is a code sample:
def custom_loss(y_true, y_pred):
    return tf.losses.mean_squared_error(y_true, y_pred)
I have used the MSE from the tf backend in this example, but you can do the calculation manually instead.
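For instance, a manual version of the same loss using backend ops might look like this (an equivalent sketch, not the only way):

import keras.backend as K

def custom_loss_manual(y_true, y_pred):
    # mean squared error computed by hand
    return K.mean(K.square(y_true - y_pred))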
Compile your model with this loss function.
model.compile(
    optimizer=your_optimizer,
    loss=custom_loss
)
You can also define your own custom metric to monitor during training.
def custom_metric(y_true, y_pred):
    return calculate_your_metric(y_true, y_pred)
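As a concrete stand-in for the placeholder calculate_your_metric, an RMSE metric could be:

import keras.backend as K

def rmse_metric(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))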
Finally, compile with it:
model.compile(
    optimizer=your_optimizer,
    loss=custom_loss,
    metrics=[custom_metric]
)
There are several examples and repositories showing how to implement perceptual loss, which sounds like what you are referring to. Of course, you can generalize and learn from some of these approaches for different models, depending on your problem. If you do so, I recommend writing about it and sharing. I don't see many examples other than ones using a pretrained VGG model, and breaking that mold might be a nice contribution! Anyway, you might find these other answers useful (a rough sketch follows the links below):
Implement perceptual loss with pretrained VGG using keras
VGG, perceptual loss in keras
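For orientation, a minimal Keras sketch of a VGG-based perceptual loss might look like this; the layer name block3_conv3 is an arbitrary choice, and this is an illustration rather than a drop-in implementation:

import keras.backend as K
from keras.applications.vgg16 import VGG16
from keras.models import Model

vgg = VGG16(include_top=False, weights='imagenet')
feature_extractor = Model(inputs=vgg.input,
                          outputs=vgg.get_layer('block3_conv3').output)
feature_extractor.trainable = False

def perceptual_loss(y_true, y_pred):
    # compare feature activations instead of raw pixels
    return K.mean(K.square(feature_extractor(y_true) -
                           feature_extractor(y_pred)))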

How can I keep the output of a model over the testing data during training, in Keras?

Keras computes the output of the model on the testing data, batch by batch, to calculate metrics like validation loss, validation accuracy, and so on. Is there a way to access the model's output on the entire dataset that Keras is computing?
I know that I can calculate it with a callback that calls model.predict in on_epoch_end and stacks all the predictions into a single array, but that would mean evaluating the testing dataset twice per epoch.
Is there a way to access all the predictions of the model on the testing data?

Callback for Keras evaluate_generator

I am using Keras for one of my experiments. When I use the fit_generator method I can specify callbacks, so I can run custom code after each batch or epoch.
Now, while using evaluate_generator for validation, I do the following. One of my metrics looks like this:
from keras import backend as K

def accuracy(y_true, y_pred):
    return K.mean(K.equal(y_true, K.round(y_pred)))
The evaluation:
metrics = model.evaluate_generator(my_generator(...),
                                   steps=steps,
                                   use_multiprocessing=True)
Here, my_generator() yields a single input at a time (the batch size is 1). Also, I have multiple losses defined in the model, and I get all of those losses perfectly.
The problem is that I get only one value per evaluation metric. I think it's the overall metric, computed as if all the single batches were one whole input.
How can I define a callback, or anything like that, so that I can do my own calculations on the single-batch evaluations (like a fit_generator callback)?
Note: evaluate_generator does not support callbacks.
I think you are looking for a LambdaCallback:
https://keras.io/callbacks/#lambdacallback
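For instance, here is a minimal sketch that records the loss after every batch during fit/fit_generator (evaluate_generator itself accepts no callbacks; for evaluation see the manual loop further below):

from keras.callbacks import LambdaCallback

batch_losses = []
log_batch = LambdaCallback(
    on_batch_end=lambda batch, logs: batch_losses.append(logs.get('loss')))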
It is also possible to use a complete custom callback.
Example from keras documentation:
import keras

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []   # reset the history at the start of training

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))   # record the loss of every batch
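Since evaluate_generator accepts no callbacks, one workaround is a manual loop over the generator using test_on_batch, which yields per-batch losses and metrics directly. A sketch under the question's setup (my_generator, steps, and the "..." arguments are from the question):

per_batch_results = []
gen = my_generator(...)  # the "..." stands in for the question's generator arguments
for _ in range(steps):
    x, y = next(gen)
    per_batch_results.append(model.test_on_batch(x, y))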

TensorFlow FullyConnected Tutorial: How are the trained weights used for Eval and Test?

I've been looking through the TensorFlow FullyConnected tutorial. This also uses the helper code mnist.py
I understand the code except for one nagging piece. After training the neural net, the weights obtained from training should be used to evaluate the precision of the model on the validation (and test) data. However, I don't see that being done anywhere.
In fact, this is all I see in fully_connected_feed.py:
# Evaluate against the validation set.
print('Validation Data Eval:')
do_eval(sess,
        eval_correct,
        images_placeholder,
        labels_placeholder,
        data_sets.validation)

# Evaluate against the test set.
print('Test Data Eval:')
do_eval(sess,
        eval_correct,
        images_placeholder,
        labels_placeholder,
        data_sets.test)
The do_eval() function is passed a parameter eval_correct, which seems to recalculate the logits on this new data. I've been playing around with TF for a while now, but I'm baffled by this code. Any thoughts would be great.
TensorFlow creates a graph with the weights and biases. Roughly speaking, while you train this neural net the weights and biases are changed so that it produces the expected outputs. Line 131 in fully_connected_feed.py (with tf.Graph().as_default():) tells TensorFlow to use the default graph. Therefore every line in the training loop, including the calls to the do_eval() function, uses the default graph. Since the weights obtained from training are not reset before evaluation, they are used for it.
eval_correct is the operation used instead of the training operation to evaluate the neural net without training it. This is important because otherwise the neural net would be trained on the evaluation data, which would produce distorted (overly optimistic) results.
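To make the mechanism concrete, here is a rough TF1-style sketch; the build_* helpers and the batch variables are hypothetical stand-ins for the tutorial's code. The train op and the eval op share the same variables in the same graph and session, so evaluation automatically reads the trained weights.

import tensorflow as tf

with tf.Graph().as_default():
    logits = build_model(images_placeholder)                  # shared variables
    train_op = build_training_op(logits, labels_placeholder)  # updates them
    eval_correct = build_eval_op(logits, labels_placeholder)  # only reads them
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch in training_batches:
            sess.run(train_op, feed_dict=batch)    # weights change here
        correct = sess.run(eval_correct, feed_dict=eval_batch)  # uses trained weights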

Python/Keras - Creating a callback with one prediction for each epoch

I'm using Keras to predict a time series. As standard, I'm using 20 epochs. I want to know what my neural network predicted for each of the 20 epochs.
By using model.predict I'm getting only one prediction among all epochs (not sure how Keras selects it). I want all the predictions, or at least the 10 best.
According to a previous answer I got, I should compute the predictions after each training epoch by implementing an appropriate callback: subclassing Callback() and calling predict on the model inside the on_epoch_end function.
Well, the theory seems sound, but I'm having trouble coding it. Would anyone be able to give a code example of that?
I'm not sure how to implement the Callback() subclassing, nor how to combine that with model.predict inside an on_epoch_end function.
Your help will be highly appreciated :)
EDIT
Well, I have made a little progress. I found out how to create the subclass and how to link it to model.predict.
However, I'm racking my brain over how to collect the predictions of all the epochs into a list. Below is my current code:
from keras.callbacks import Callback

# Creating a Callback subclass that stores each epoch's prediction
class prediction_history(Callback):
    def on_epoch_end(self, epoch, logs={}):
        self.predhis = model.predict(predictor_train)

# Instantiating the subclass
predictions = prediction_history()

# Fitting the neural network
model.fit(x=predictor_train, y=target_train, epochs=2, batch_size=batch,
          validation_split=0.1, callbacks=[predictions])

# Printing the prediction history
print(predictions.predhis)
However, all I'm getting with that is the predictions of the last epoch (the same effect as printing model.predict(predictor_train)).
The question now is: how do I adapt my code so that it adds the predictions of each one of the epochs to predhis?
You are overwriting the prediction at every epoch; that is why it doesn't work. I would do it like this:
class prediction_history(Callback):
    def __init__(self):
        self.predhis = []

    def on_epoch_end(self, epoch, logs={}):
        # self.model is set by Keras once the callback is attached to fit()
        self.predhis.append(self.model.predict(predictor_train))
This way self.predhis is now a list and each prediction is appended to the list at the end of each epoch.
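Usage then mirrors the question's code, with each element of predhis indexed by epoch:

predictions = prediction_history()
model.fit(x=predictor_train, y=target_train, epochs=20, batch_size=batch,
          validation_split=0.1, callbacks=[predictions])
print(predictions.predhis[0])   # predictions recorded after the first epoch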
