I am working on creating a custom loss function in Keras.
Here is an example.
import keras.backend as K

def test(y_true, y_pred):
    loss = K.square(y_pred - y_true)
    loss = K.mean(loss, axis=1)
return loss
Now, in this example, I would like to subtract only certain specific values from y_pred. But since this runs in TensorFlow, how do I iterate through them? For example, can I iterate through y_pred to pick values, and how? Let's say the batch size is 5 for this example.
I have tried things such as
y_pred[0...i]
tf.arange and many more...
Just pass it when you are compiling the model, like
model.compile(optimizer='sgd', loss=test)
Keras will iterate over it automatically.
You also have an indentation error in the return statement; it must be inside the function.
import keras.backend as K

def test(y_true, y_pred):
    loss = K.square(y_pred - y_true)
    loss = K.mean(loss, axis=1)
    return loss

def test_accuracy(y_true, y_pred):
    return 1 - test(y_true, y_pred)
This way you can pass your custom loss function to the model, and you can pass a custom accuracy function similarly:
model.compile(optimizer='sgd', loss=test, metrics=[test_accuracy])
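As for picking specific values from y_pred: you generally don't iterate over a tensor in Python. Instead you select the entries you want with slicing or tf.gather, and the operation is applied to the whole batch at once. A minimal sketch (the column indices here are arbitrary, chosen only for illustration):

import tensorflow as tf
import keras.backend as K

def partial_test(y_true, y_pred):
    # select columns 0 and 2 of every row in the batch instead of looping
    idx = tf.constant([0, 2])
    y_pred_sel = tf.gather(y_pred, idx, axis=1)
    y_true_sel = tf.gather(y_true, idx, axis=1)
    return K.mean(K.square(y_pred_sel - y_true_sel), axis=1)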
Related
Is there a way to have two loss functions in Keras in which the second loss function takes the output from the first loss function?
I am working on a neural network with Keras, and I want to add another custom function to the loss term inside model.compile() to regularize and somehow penalize it. The call currently has the form:
model.compile(loss_1='mean_squared_error', optimizer=Adam(lr=learning_rate), metrics=['mae'])
I would like to add another loss function defined as the sum of the predicted values from the loss_1 outputs, so that I can tell the neural network to minimize that sum as well. How can I do that (loss_2)?
Something like:
model.compile(loss_1='mean_squared_error', loss_2=np.sum(****PREDICTED_OUTPUT_FROM_LOSS_FUNCTION_1****), optimizer=Adam(lr=learning_rate), metrics=['mae'])
How can this be implemented?
You should define a single custom loss function that combines both terms:
import tensorflow as tf

def custom_loss_function(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    absolute_difference = tf.abs(y_true - y_pred)
    loss = tf.reduce_mean(squared_difference, axis=-1) + tf.reduce_mean(absolute_difference, axis=-1)
    return loss

model.compile(optimizer='adam', loss=custom_loss_function)
I believe that would solve your problem.
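If the second term really must consume the first loss's output, you can also compose two plain Python functions inside one Keras loss. A minimal sketch under that assumption (the names loss_1 and combined_loss are illustrative):

import tensorflow as tf

def loss_1(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

def combined_loss(y_true, y_pred):
    first = loss_1(y_true, y_pred)   # per-example MSE
    second = tf.reduce_sum(first)    # "loss_2": the sum of loss_1's outputs
    return first + second

Pass combined_loss to model.compile like any single loss; Keras only ever sees one callable.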
I have a regression task and am measuring fit using Euclidean distance. Instead of displaying the mean squared error as the loss, I want to display the sum of squares; that is, I want to sum over the squared error terms without dividing by the number of examples.
On the batch level I can achieve this by defining a custom loss like so (maybe I could instead use tf.keras.losses.MeanSquaredError directly):
import tensorflow as tf

class CustomLoss(tf.keras.losses.Loss):
    def call(self, Y_true, Y_pred):
        return tf.reduce_sum(tf.math.abs(Y_true - Y_pred) ** 2, axis=-1)

target_loss = CustomLoss(reduction=tf.keras.losses.Reduction.SUM)
This computes the squared error for each example and then instructs TensorFlow to SUM over the examples to get the batch loss, instead of the default SUM_OVER_BATCH_SIZE (which should not be read literally, but as a fraction, i.e., SUM / BATCH_SIZE).
My problem is that, on an epoch level, Keras takes these sums and then computes the mean across steps (batches) to report the loss of the epoch. How do I get Keras to compute the sum over batches instead of the mean?
You will have to write a custom callback which appends the loss after each batch to a list (as shown in the linked doc). Then implement on_epoch_end to get the sum of all the values in that list (where you collected the batch losses); a sketch follows below.
If you want to actually minimize the sum of losses over all the batches, use the K.function API.
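A minimal sketch of such a callback. Note that in recent TF versions, logs['loss'] at batch end is a running mean over the epoch so far, so the per-batch value is recovered from consecutive running means:

import tensorflow as tf

class SumOverBatches(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        self.total = 0.0
        self.prev_mean = 0.0

    def on_train_batch_end(self, batch, logs=None):
        running_mean = logs["loss"]
        # undo the running average to recover this batch's loss
        batch_loss = running_mean * (batch + 1) - self.prev_mean * batch
        self.total += batch_loss
        self.prev_mean = running_mean

    def on_epoch_end(self, epoch, logs=None):
        print("epoch %d: summed loss over batches = %.4f" % (epoch, self.total))

Pass it via model.fit(..., callbacks=[SumOverBatches()]).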
You can sum over the batches with a tf.keras.metrics.Metric as below, but right now there is an open issue pending in 2.4.x (please see this GitHub issue); you can try with 2.3.2, though:
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Activation
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

class AddAllOnes(tf.keras.metrics.Metric):
    """A simple metric that adds up all the ones in the current batch and is supposed to return the total ones seen at every batch end."""

    def __init__(self, name="add_all_ones", **kwargs):
        super(AddAllOnes, self).__init__(name=name, **kwargs)
        self.total = self.add_weight(name="total", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        self.total.assign_add(tf.cast(tf.reduce_sum(y_true), dtype=tf.float32))

    def result(self):
        print('')
        print('inside result...', self.total)
        return self.total

X_train = np.random.random((512, 8))
y_train = np.random.randint(0, 2, (512, 1))

K.clear_session()
model_inputs = Input(shape=(8,))
model_unit = Dense(256, activation='linear', use_bias=False)(model_inputs)
model_unit = BatchNormalization()(model_unit)
model_unit = Activation('sigmoid')(model_unit)
model_outputs = Dense(1, activation='sigmoid')(model_unit)
optim = Adam(learning_rate=0.001)
model = Model(inputs=model_inputs, outputs=model_outputs)
model.compile(loss='binary_crossentropy', optimizer=optim, metrics=[AddAllOnes()], run_eagerly=True)
model.fit(X_train, y_train, verbose=1, batch_size=32)
I am building an image segmentation model using Keras, and I want to train my model on multiple loss functions. I have seen this link, but I am looking for a simpler and more straightforward solution for this situation, as my loss functions are quite complex. Can someone tell me how to build a model with a single output and multiple losses in Keras?
You can use multiple losses with one output by using a weighted loss, which is a sum of your losses, each multiplied by a weight. Create a custom loss which returns this weighted sum and pass it to model.compile. There is an example here.
This is just an example from here; you could play around with it:
import tensorflow as tf

def custom_losses(y_true, y_pred):
    alpha = 0.6
    squared_difference = tf.square(y_true - y_pred)
    huber = tf.keras.losses.huber(y_true, y_pred)
    return tf.reduce_mean(squared_difference, axis=-1) + (alpha * huber)

model.compile(optimizer='adam', loss=custom_losses, metrics=['MeanSquaredError'])
I am writing a Keras custom loss function wherein I want to pass the following to this function: y_true and y_pred (these two will be passed automatically anyway), the weights of a layer inside the model, and a constant. Something like below:
def Custom_loss(y_true, y_pred, layer_weights, val=0.01):
    loss = mse(y_true, y_pred)
    loss += K.sum(val, K.abs(K.sum(K.square(layer_weights), axis=1)))
    return loss
But the above implementation gives me an error. How can I achieve this in Keras?
New answer
I think you're looking exactly for L2 regularization. Just create a regularizer and add it to the layers:
from keras.regularizers import l2

# in the target layers (Dense, Conv2D, etc.):
layer = Dense(units, ..., kernel_regularizer=l2(some_coefficient))
You can use bias_regularizer as well. The some_coefficient variable is multiplied by the squared value of each weight.
PS: if val in your code is constant, it should not harm your loss. But you can still use the old answer below for val.
Old answer
Wrap the function Keras expects (with its two parameters) in an outer function that takes what you need:
from keras import backend as K
from keras.losses import mean_squared_error as mse

def customLoss(layer_weights, val=0.01):
    def lossFunction(y_true, y_pred):
        loss = mse(y_true, y_pred)
        # scale the weight penalty by val rather than passing val to K.sum
        loss += val * K.sum(K.abs(K.sum(K.square(layer_weights), axis=1)))
        return loss
    return lossFunction

model.compile(loss=customLoss(weights, 0.03), optimizer=..., metrics=...)
Notice that layer_weights must come directly from the layer as a tensor, so you can't use get_weights(); you must go with someLayer.kernel and someLayer.bias (or the respective attribute name, for layers that use different names for their trainable parameters).
The answer here shows how to deal with that if your external variables vary with the batch: How to define custom cost function that depends on input when using ImageDataGenerator in Keras?
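For instance, here is a sketch of wiring one layer's kernel into customLoss above (the toy architecture is made up for illustration):

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(10,))
penalized = Dense(64, activation='relu')  # keep a handle on the layer
x = penalized(inputs)
outputs = Dense(1)(x)
model = Model(inputs, outputs)

# penalized.kernel is a tensor, so it can live inside the loss graph
model.compile(loss=customLoss(penalized.kernel, 0.03), optimizer='adam')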
You can do this another way, by using a lambda as follows:
model.compile(loss=[lambda y_true, y_pred: Custom_loss(y_true, y_pred, layer_weights, val=0.01)], optimizer=...)
There are some issues with saving and loading the model this way. A workaround is to save only the weights and use model.load_weights(...); see the sketch below.
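A sketch of that workaround (the file name is illustrative, and weights stands for whatever tensor you passed to customLoss):

model.save_weights("model_weights.h5")

# later: rebuild the same architecture, then restore and recompile
model.load_weights("model_weights.h5")
model.compile(loss=customLoss(weights, 0.03), optimizer='adam')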
I am trying to implement a custom objective function in the Keras framework: a weighted average function that takes the two tensor arguments y_true and y_pred, where the weight information is derived from the y_true tensor. Is there a weighted average function in TensorFlow? Or any other suggestions on how to implement this kind of loss function?
My function would look something like this:
function(y_true, y_pred):
    A = (y_true - y_pred)**2
    w = ...  # derivable from y_true, a tensor of the same shape as y_true
    return average(A, weights=w)  # a scalar
y_true and y_pred are 3D tensors.
You can use one of the existing objectives (also called losses) in Keras, listed here. You may also implement your own custom loss function:
from keras import backend as K

def my_loss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

# Let's train the model using SGD
model.compile(loss=my_loss, optimizer='SGD', metrics=['accuracy'])
Notice the K module: it is the Keras backend, which you should use to take full advantage of Keras's performance. Don't do something like the following unless you don't care about performance:
def my_bad_and_slow_loss(y_true, y_pred):
    # plain numpy instead of the backend: slow, and it breaks in graph mode
    return np.sum((y_pred - y_true) ** 2, axis=-1)
For your specific case, please write out your desired objective function if you need help writing it.
Update
You can try this to provide the weights W to the loss function:
import numpy as np

def my_loss(y_true, y_pred):
    W = np.arange(9) / 9.  # some example W
    return K.mean(K.pow(y_true - y_pred, 2) * W)
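For the asker's case, the weights can be computed from y_true inside the function itself, with K the Keras backend as above. A sketch assuming, purely as an example, weights proportional to |y_true|:

def weighted_average_loss(y_true, y_pred):
    A = K.square(y_true - y_pred)
    w = K.abs(y_true) + K.epsilon()  # example weights derived from y_true
    return K.sum(w * A) / K.sum(w)   # weighted average -> a scalar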