Handling and combining two loss functions in Keras/TF - python

Is there a way to have two loss functions in Keras in which the second loss function takes the output from the first loss function?
I am working on a neural network with Keras and I want to add a custom function to the loss term inside model.compile() to regularize and penalize the model. The current call has the form:
model.compile(loss_1='mean_squared_error', optimizer=Adam(lr=learning_rate), metrics=['mae'])
I would like to add a second loss function, the sum of the predicted values, so that I can tell the neural network to also minimize the sum of its predictions. How can I do that (loss_2)?
Something like:
model.compile(loss_1='mean_squared_error', loss_2= np.sum(****PREDICTED_OUTPUT_FROM_LOSS_FUNCTION_1****), optimizer=Adam(lr=learning_rate), metrics=['mae'])
How can this be implemented?

You should define a custom loss function, for example:
import tensorflow as tf

def custom_loss_function(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    absolute_difference = tf.abs(y_true - y_pred)
    # MSE term plus MAE term, each averaged over the last axis
    loss = tf.reduce_mean(squared_difference, axis=-1) + tf.reduce_mean(absolute_difference, axis=-1)
    return loss

model.compile(optimizer='adam', loss=custom_loss_function)
I believe that would solve your problem.
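Since Keras accepts only one loss function per output, the "loss_2" term from the question has to be folded into the same custom loss. A minimal sketch of that idea, assuming the model above is already defined; beta is a hypothetical weighting factor, not something from the question:

import tensorflow as tf

def mse_plus_prediction_penalty(y_true, y_pred):
    beta = 0.01  # assumed penalty weight, not from the question; tune for your problem
    mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    penalty = tf.reduce_sum(y_pred, axis=-1)  # the "sum of the predicted values"
    return mse + beta * penalty

model.compile(optimizer='adam', loss=mse_plus_prediction_penalty)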


Why is my Keras custom loss function not working?

I wrote a squared loss function for categorisation of one-hot encoded data:
import numpy as np
import keras.backend as K

def squared_categorical_loss(y_true, y_pred):
    return K.mean(K.square(1.0 - K.sum(y_true * y_pred, axis=1)))
which works when given numpy array examples, such as
y_true = np.asarray([[1,0,0],[0,1,0]])
y_pred = np.asarray([[0.5,0.2,0.3],[0.4,0.6,0]])
squared_categorical_loss(y_true, y_pred)
The example above returns a tensor with the value 0.205, which is the mean of (1-0.5)^2 and (1-0.6)^2. That is the desired result, and it should be an optimisable loss function that generally correlates with accuracy. But when I apply it to a TensorFlow model,
model.compile(optimizer='adam',
              loss=squared_categorical_loss,
              metrics=['accuracy'])
the loss decreases to extremely small values while the training accuracy stays below 50%. That shouldn't be possible: a loss below 0.125 can't be achieved mathematically unless the accuracy is above 50%. So what is wrong with my implementation?
Thanks!
It will work only if y_pred is normalized (sums to 1). I think you forgot to apply softmax in the last layer of your model.
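For instance, a minimal sketch of a model whose final layer normalizes the output with softmax, so y_pred sums to 1 as the loss above assumes; the layer sizes and input shape here are illustrative assumptions:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(10,)),  # assumed sizes
    keras.layers.Dense(3, activation='softmax'),  # 3 classes, outputs now sum to 1
])
model.compile(optimizer='adam',
              loss=squared_categorical_loss,
              metrics=['accuracy'])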

Custom Loss Function in Keras - Iterate through TensorFlow

I am working on creating a custom loss function in Keras.
Here is an example.
import keras.backend as K

def test(y_true, y_pred):
    loss = K.square(y_pred - y_true)
    loss = K.mean(loss, axis=1)
    return loss
Now in this example, I would like to subtract, let's say, only specific values from y_pred, but since this is in TensorFlow, how do I iterate through them? For example, can I iterate through y_pred to pick values, and how?
Let's say for this example the batch size is 5.
I have tried things such as
y_pred[0...i]
tf.arange and many more...
Just pass it when you are compiling the model, like
model.compile(optimizer='sgd', loss=test)
Keras will iterate over it automatically. You also have an indentation error in the return statement.
import keras.backend as K

def test(y_true, y_pred):
    loss = K.square(y_pred - y_true)
    loss = K.mean(loss, axis=1)
    return loss

def test_accuracy(y_true, y_pred):
    return 1 - test(y_true, y_pred)
This way you can pass your custom loss function to the model, and you can pass a custom accuracy function similarly:
model.compile(optimizer='sgd', loss=test, metrics=[test_accuracy])
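As for picking specific values out of y_pred: you generally don't iterate in Python at all; tensor slicing inside the loss covers it for any batch size. A sketch, where the column indices and the constant 0.5 are purely illustrative assumptions:

import keras.backend as K

def sliced_test(y_true, y_pred):
    # y_pred[:, :2] selects the first two outputs of every sample in
    # the batch, with no Python-level iteration; the indices and the
    # constant 0.5 are illustrative, not from the question.
    adjusted = y_pred[:, :2] - 0.5
    loss = K.square(adjusted - y_true[:, :2])
    return K.mean(loss, axis=1)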

Training a model with single output on multiple losses keras

I am building an image segmentation model using Keras and I want to train it on multiple loss functions. I have seen this link, but I am looking for a simpler and more straightforward solution, as my loss functions are quite complex. Can someone tell me how to build a model with a single output and multiple losses in Keras?
You can use multiple losses with one output via a weighted loss, which is a sum of your losses, each multiplied by a weight. Create a custom loss that returns this weighted sum and pass it to model.compile. There is an example here.
This is just an example from here. You could play around with it.
import tensorflow as tf

def custom_losses(y_true, y_pred):
    alpha = 0.6
    squared_difference = tf.square(y_true - y_pred)
    huber = tf.keras.losses.huber(y_true, y_pred)
    # weighted sum: MSE plus alpha times the Huber loss
    return tf.reduce_mean(squared_difference, axis=-1) + (alpha * huber)

model.compile(optimizer='adam', loss=custom_losses, metrics=['MeanSquaredError'])

Individual loss of each (final-layer) output of Keras model

When training an ANN for regression, Keras stores the train/validation loss in a History object. In the case of multiple outputs in the final layer with a standard loss function, e.g. the Mean Squared Error (MSE):
what does the loss represent in the multi-output scenario? Is it the average/mean of the individual losses of all outputs or is it something else?
Can I somehow access the loss of each output individually without implementing a custom loss function?
Any hints would be much appreciated.
EDIT:
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(10, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(2))
model.compile(loss='mse', optimizer='adam')
Re-phrasing my question after adding the snippet:
How is the loss calculated in the case of two neurons in the output layer and what does the resulting loss represent? Is it the average loss for both outputs?
The standard MSE loss is implemented in Keras as follows:
import keras.backend as K

def mse_loss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)
If you now have multiple neurons at the output layer, the computed loss is simply the mean of the squared errors of all the individual neurons.
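For example, with two output neurons, if one sample's squared errors are 0.04 and 0.16, its reported loss is (0.04 + 0.16) / 2 = 0.10, and the batch loss is the mean of these per-sample values.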
If you want the loss of each individual output to be tracked, you have to write your own metric for that. To keep it as simple as possible, you can use the following metric (it has to be nested, since Keras only allows a metric to have the inputs y_true and y_pred):
def inner_part_custom_metric(y_true, y_pred, i):
    d = y_pred - y_true
    square_d = K.square(d)
    return square_d[:, i]  # y has shape [batch_size, output_dim]

def custom_metric_output_i(i):
    def custom_metric_i(y_true, y_pred):
        return inner_part_custom_metric(y_true, y_pred, i)
    return custom_metric_i
Now, say you have 2 output neurons. Create 2 instances of this metric:
metrics = [custom_metric_output_i(0), custom_metric_output_i(1)]
Then compile your model as follows:
model = ...
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=metrics)
history = model.fit(...)
Now you can access the loss of each individual neuron in the history object. Use the following command to see what's in the history object:
print(history.history.keys())
and then:
print(history.history['custom_metric_i'])
which, as stated before, will actually print the history for only one dimension!
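Since both generated functions share the name custom_metric_i, their keys in the history can collide. One workaround, a sketch relying on Keras deriving the metric key from the function's __name__, is to give each closure a unique name inside the factory:

def custom_metric_output_i(i):
    def custom_metric_i(y_true, y_pred):
        return inner_part_custom_metric(y_true, y_pred, i)
    # Keras uses the function name as the history key, so make it
    # unique per output: 'custom_metric_0', 'custom_metric_1', ...
    custom_metric_i.__name__ = 'custom_metric_' + str(i)
    return custom_metric_i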

Weighted average of a tensor

I am trying to implement a custom objective function in the Keras framework: a weighted average function that takes the two tensors y_true and y_pred as arguments, where the weight information is derived from y_true.
Is there a weighted average function in TensorFlow? Or any other suggestions on how to implement this kind of loss function?
My function would look something like this:
def function(y_true, y_pred):
    A = (y_true - y_pred)**2
    w = ...  # derivable from y_true, tensor of same shape as y_true
    return average(A, weights=w)  # <-- a scalar
y_true and y_pred are 3D tensors.
You can use one of the existing objectives (also called losses) in Keras from here. You may also implement your own custom loss function:
from keras import backend as K

def my_loss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

# Let's train the model using SGD
model.compile(loss=my_loss, optimizer='SGD', metrics=['accuracy'])
Notice the K module: it's the Keras backend, which you should use to fully utilize Keras' performance. Don't do something like this unless you don't care about performance issues:
def my_bad_and_slow_loss(y_true, y_pred):
    return sum((y_pred - y_true) ** 2, axis=-1)
For your specific case, please post your desired objective function if you need help writing it.
Update
You can try this to provide the weights W in the loss function:
import numpy as np
import keras.backend as K

def my_loss(y_true, y_pred):
    W = np.arange(9) / 9.  # some example W
    return K.mean(K.pow(y_true - y_pred, 2) * W)
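If what you want is a true weighted average (a scalar normalized by the total weight, like numpy's average with weights=), the same backend ops cover it. A sketch, where the derivation of the weights from y_true is a placeholder assumption, since only the asker knows the real rule:

import keras.backend as K

def weighted_average_loss(y_true, y_pred):
    a = K.square(y_true - y_pred)
    w = K.abs(y_true)  # placeholder: derive your real weights from y_true
    return K.sum(a * w) / K.sum(w)  # scalar weighted average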
