I am working on an image segmentation and classification model that takes one 3D image (64x64x64) and returns two outputs (a 3D mask of 64x64x64 and a one-hot encoded category). The two outputs are defined like this:
seg_final_stage = Conv3D(seg_classes, kernel_size=3, strides=1, padding="same", name="Seg_Final")(decoder1)
output_seg = Activation('sigmoid', name = "Seg_Final_Sigmoid")(seg_final_stage)
class_final = Dense(texture_classes, name="Class_Final")(class_dense3)
output_class = Activation('softmax', name = "Class_Final_SoftMax")(class_final)
model = Model(input, [output_seg, output_class], name = Config["Model_Name"])
There are two different methods that I have tried, but both failed; it seems like each output returns its own loss instead of just one loss. The following are the loss functions I have now.
# Custom loss functions
import tensorflow as tf
from tensorflow.keras.losses import CategoricalCrossentropy

def dice_coef(y_true, y_pred):
smooth = 1.
y_true_f = tf.reshape(y_true, [-1])
y_pred_f = tf.reshape(y_pred, [-1])
intersection = tf.reduce_sum(y_true_f * y_pred_f)
score = (2. * intersection + smooth) / (tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
return score
def dice_loss(y_true, y_pred):
return 1 - dice_coef(y_true, y_pred)
def CCE(y_true, y_pred):
class_loss = CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
return class_loss(y_true, y_pred)
def hybrid(y_true, y_pred):
    mask_weight = 0.8
    class_weight = 0.2
    # intent: treat index 0 as the segmentation output and index 1 as the
    # classification output (this indexing is what fails; see below)
    mask_loss = dice_loss(y_true[0], y_pred[0])
    class_loss = CCE(y_true[1], y_pred[1])
    return mask_weight * mask_loss + class_weight * class_loss
The first try:
I used a single hybrid loss function that tries to sum the dice loss and CCE together with weights. However, the model returns 2 losses, like output_seg_hybrid and output_class_hybrid. I thought y_true and y_pred were formed as arrays, so that I could take the first item to calculate dice and the second item for CCE.
model.compile(loss=hybrid,
optimizer=Nadam(learning_rate=Config["learning_rate"], beta_1=0.9, beta_2=0.999, epsilon=1e-07, name="Nadam"),
metrics=[hybrid])
The second try:
I applied 2 losses as below to ensure each output has a corresponding loss, so the model returns 2 losses that are summed up with weights. What I get is actually 4 losses: Seg_Final_Sigmoid_dice_loss, Class_Final_SoftMax_dice_loss, Seg_Final_Sigmoid_CCE, and Class_Final_SoftMax_CCE.
model.compile(loss={'Seg_Final_Sigmoid':dice_loss, 'Class_Final_SoftMax':CCE},
optimizer=model.optimizer, metrics=[dice_loss, CCE], loss_weights = [0.8,0.2])
What should I do if I just want to sum up these two losses as one loss?
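For what it's worth, the second attempt already does what the question asks: with a dict of losses plus loss_weights, Keras minimizes the single scalar 0.8 * dice_loss + 0.2 * CCE, and the per-output losses in the logs are only a breakdown of that total. The four extra entries come from metrics=[dice_loss, CCE] being applied to every output. A sketch that scopes each metric to its own output (reusing the layer names and Config values above) keeps the logs readable:

from tensorflow.keras.optimizers import Nadam

model.compile(
    # Keras optimizes one scalar: the weighted sum of the per-output losses
    loss={'Seg_Final_Sigmoid': dice_loss, 'Class_Final_SoftMax': CCE},
    loss_weights={'Seg_Final_Sigmoid': 0.8, 'Class_Final_SoftMax': 0.2},
    optimizer=Nadam(learning_rate=Config["learning_rate"]),
    # restrict each metric to the output it applies to
    metrics={'Seg_Final_Sigmoid': [dice_loss], 'Class_Final_SoftMax': [CCE]},
)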
Related
I have an image segmentation problem I have to solve in TensorFlow 2.
In particular, I have a training set composed of aerial images paired with their respective masks. In a mask, the terrain is colored black and the buildings are colored white. The purpose is to predict the masks for the images in the test set.
I use a UNet with a final Conv2DTranspose with 1 filter and a sigmoid activation function. The prediction is made in the following way on the output of the final sigmoid layer: if y_pred>0.5, then it's a building, otherwise it's the background.
I want to implement a dice loss, so I wrote the following function:
def dice_loss(y_true, y_pred):
print("[dice_loss] y_pred=",y_pred,"y_true=",y_true)
y_pred = tf.cast(y_pred > 0.5, tf.float32)
y_true = tf.cast(y_true, tf.float32)
numerator = 2 * tf.reduce_sum(y_true * y_pred)
denominator = tf.reduce_sum(y_true + y_pred)
return 1 - numerator / denominator
which I pass to TensorFlow in the following way:
loss = dice_loss
optimizer = tf.keras.optimizers.Adam(learning_rate=config.learning_rate)
metrics = [my_IoU, 'acc']
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
but at training time TensorFlow throws the following error:
ValueError: No gradients provided for any variable:
The problem is in your loss function (obviously). In particular, it is the following operation:
y_pred = tf.cast(y_pred > 0.5, tf.float32)
This is not a differentiable operation, which results in the gradients being None. Change your loss function to the following and it will work:
def dice_loss(y_true, y_pred):
print("[dice_loss] y_pred=",y_pred,"y_true=",y_true)
y_true = tf.cast(y_true, tf.float32)
numerator = 2 * tf.reduce_sum(y_true * y_pred)
denominator = tf.reduce_sum(y_true + y_pred)
return 1 - numerator / denominator
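One caveat, borrowed from the dice_coef earlier in this thread: if a batch happens to contain no positive pixels in either y_true or y_pred, the denominator is zero. Adding the same smooth term used above guards against that (a minimal sketch):

def dice_loss(y_true, y_pred, smooth=1.0):
    y_true = tf.cast(y_true, tf.float32)
    # smooth keeps the ratio finite when both masks are empty
    numerator = 2 * tf.reduce_sum(y_true * y_pred) + smooth
    denominator = tf.reduce_sum(y_true + y_pred) + smooth
    return 1 - numerator / denominator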
So my question is, if I have something like:
model = Model(inputs = input, outputs = [y1,y2])
model.compile(loss = my_loss ...)
I have only seen my_loss as a dictionary of independent losses, with the final loss then defined as their sum. But can I define, in a multitask model, a loss function that takes all the predicted/true values, so that I can multiply them (for instance)?
This is the loss I am trying to define:
def my_loss(y_true1, y_true2, y_pred1, y_pred2):
final_loss = binary_crossentropy(y_true1, y_pred1) + y_true1 * categorical_crossentropy(y_true2, y_pred2)
return final_loss
Usually, the parameters of a loss function are y_true, y_pred, where y_pred is either y1 or y2. But now I need both to compute the loss, so how can I define this loss function and pass all four parameters: y_true1, y_true2, y_pred1, y_pred2?
My current model that I want to change its loss:
x = Input(shape=(n, ))
shared = Dense(32)(x)
sub1 = Dense(16)(shared)
sub2 = Dense(16)(shared)
y1 = Dense(1, activation='sigmoid')(sub1)
y2 = Dense(4, activation='softmax')(sub2)
model = Model(inputs = x, outputs = [y1,y2])
model.compile(loss = ['binary_crossentropy', 'categorical_crossentropy'] ...) #THIS LINE I WANT TO CHANGE IT
Thanks!
I'm not sure if I'm understanding correctly, but I'll try.
The loss function must take both the predicted and the actual data -- it's a way to measure the error between what your model is predicting and the true data. However, the predicted and actual data do not need to be one-dimensional. You can make y_pred a tensor that contains both y_pred1 and y_pred2. Likewise, y_true can be a tensor that contains both y_true1 and y_true2.
As far as I know, loss functions should return a single number. That's why loss functions often have a mean or a sum to add up all of the losses for individual data points.
Here's an example of mean square error that will work for more than 1D:
import keras.backend as K
def my_loss(y_true, y_pred):
# this example is mean squared error
    # works if y_pred and y_true are greater than 1D
return K.mean(K.square(y_pred - y_true))
Here's another example of a loss function that I think is closer to your question (although I cannot comment on whether or not it's a good loss function):
def my_loss(y_true, y_pred):
    # calculate mean(abs(y_pred1*y_pred2 - y_true1*y_true2))
# this will work for 2D inputs of y_pred and y_true
return K.mean(K.abs(K.prod(y_pred, axis = 1) - K.prod(y_true, axis = 1)))
Update:
You can concatenate two outputs into a single tensor with keras.layers.Concatenate. That way you can still have a loss function with only two arguments.
In the model you wrote above, the y1 output shape is (None, 1) and the y2 output shape is (None, 4). Here's an example of how you could write your model so that the output is a single tensor that concatenates y1 and y2 into a shape of (None, 5):
from keras import Model
from keras.layers import Input, Dense
from keras.layers import Concatenate
input_layer = Input(shape=(n, ))
shared = Dense(32)(input_layer)
sub1 = Dense(16)(shared)
sub2 = Dense(16)(shared)
y1 = Dense(1, activation='sigmoid')(sub1)
y2 = Dense(4, activation='softmax')(sub2)
mergedOutput = Concatenate()([y1, y2])
Below, I show an example of how you could rewrite your loss function. I wasn't sure which of the 5 columns of the output to call y_true1 vs. y_true2, so I guessed that y_true1 was the first column and y_true2 was the remaining 4 columns. The same column structure applies to y_pred1 and y_pred2.
from keras import losses
def my_loss(y_true, y_pred):
final_loss = (losses.binary_crossentropy(y_true[:, 0], y_pred[:, 0]) +
y_true[:, 0] *
losses.categorical_crossentropy(y_true[:, 1:], y_pred[:,1:]))
return final_loss
Finally, you can compile the model without any major changes from normal:
model.compile(optimizer='adam', loss=my_loss)
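One detail worth spelling out (an assumption about the training setup, since it isn't shown above): with the merged output the model has a single output of shape (None, 5), so the targets have to be concatenated the same way before fitting. A sketch, where X, y_true1, and y_true2 are hypothetical training arrays:

import numpy as np

# the model must be built on the merged output defined above
model = Model(inputs=input_layer, outputs=mergedOutput)
model.compile(optimizer='adam', loss=my_loss)

# y_true1: shape (num_samples, 1); y_true2: one-hot, shape (num_samples, 4)
y_merged = np.concatenate([y_true1, y_true2], axis=1)  # shape (num_samples, 5)
model.fit(X, y_merged, epochs=10)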
My U-Net's train_dice_loss is decreasing, but my val_dice_loss remains at 0.4. It looks like the network is overfitting, but shouldn't val_dice_loss increase at some point?
The network is based on the Carvana Segmentation Competition (Colab Carvana Segmentation). I use the same model, target function, and data augmentation pipeline, but I have much less data (~1900 images at 256x256 px). I split my data into a training, a validation, and a test set. On the test set my model predicts quite well (average dice_coeff ~0.75), but I can't explain this graph.
Additional Information:
import tensorflow as tf
from keras import losses

def dice_coeff(y_true, y_pred):
smooth = 1.
# Flatten
y_true_f = tf.reshape(y_true, [-1])
y_pred_f = tf.reshape(y_pred, [-1])
intersection = tf.reduce_sum(y_true_f * y_pred_f)
score = (2. * intersection + smooth) / (tf.reduce_sum(y_true_f) +
tf.reduce_sum(y_pred_f) + smooth)
return score
def dice_loss(y_true, y_pred):
loss = 1 - dice_coeff(y_true, y_pred)
return loss
def bce_dice_loss(y_true, y_pred):
loss = losses.binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
return loss
I also tried different splits and Keras optimizers. It always ends at ~0.4.
Hi, I have been trying to make a custom loss function in Keras for the dice error coefficient. It has implementations in TensorBoard, and I tried using the same function in Keras with TensorFlow, but it keeps returning a NoneType when I use model.train_on_batch or model.fit, whereas it gives proper values when used in the model's metrics. Can someone please help me out with what I should do? I have tried libraries like Keras-FCN by ahundt, where he has used custom loss functions, but none of them seem to work. The target and output in the code are y_true and y_pred respectively, as used in the losses.py file in Keras.
def dice_hard_coe(target, output, threshold=0.5, axis=[1,2], smooth=1e-5):
"""References
-----------
- `Wiki-Dice <https://en.wikipedia.org/wiki/Sørensen–Dice_coefficient>`_
"""
output = tf.cast(output > threshold, dtype=tf.float32)
target = tf.cast(target > threshold, dtype=tf.float32)
inse = tf.reduce_sum(tf.multiply(output, target), axis=axis)
l = tf.reduce_sum(output, axis=axis)
r = tf.reduce_sum(target, axis=axis)
hard_dice = (2. * inse + smooth) / (l + r + smooth)
hard_dice = tf.reduce_mean(hard_dice)
return hard_dice
There are two steps in implementing a parameterized custom loss function in Keras. First, writing a method for the coefficient/metric. Second, writing a wrapper function to format things the way Keras needs them to be.
It's actually quite a bit cleaner to use the Keras backend instead of tensorflow directly for simple custom loss functions like DICE. Here's an example of the coefficient implemented that way:
import keras.backend as K
def dice_coef(y_true, y_pred, smooth, thresh):
    # note: thresholding y_pred is not differentiable (see the earlier answer
    # about gradients); the cast is needed so the sums operate on floats
    y_pred = K.cast(y_pred > thresh, 'float32')
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
Now for the tricky part. Keras loss functions must only take (y_true, y_pred) as parameters. So we need a separate function that returns another function.
def dice_loss(smooth, thresh):
    def dice(y_true, y_pred):
        return -dice_coef(y_true, y_pred, smooth, thresh)
    return dice
Finally, you can use it as follows in Keras compile.
# build model
model = my_model()
# get the loss function
model_dice = dice_loss(smooth=1e-5, thresh=0.5)
# compile model (compile also requires an optimizer; 'adam' here is a placeholder)
model.compile(optimizer='adam', loss=model_dice)
According to the documentation, you can use a custom loss function like this:
Any callable with the signature loss_fn(y_true, y_pred) that returns an array of losses (one per sample in the input batch) can be passed to compile() as a loss. Note that sample weighting is automatically supported for any such loss.
As a simple example:
def my_loss_fn(y_true, y_pred):
squared_difference = tf.square(y_true - y_pred)
return tf.reduce_mean(squared_difference, axis=-1) # Note the `axis=-1`
model.compile(optimizer='adam', loss=my_loss_fn)
Complete example:
import tensorflow as tf
import numpy as np
def my_loss_fn(y_true, y_pred):
squared_difference = tf.square(y_true - y_pred)
return tf.reduce_mean(squared_difference, axis=-1) # Note the `axis=-1`
model = tf.keras.Sequential([
tf.keras.layers.Dense(8, activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss=my_loss_fn)
x = np.random.rand(1000, 1)  # 2D input: (samples, features), as Dense layers expect
y = x**2
history = model.fit(x, y, epochs=10)
In addition, you can extend an existing loss function by inheriting from it, for example masking BinaryCrossentropy:
class MaskedBinaryCrossentropy(tf.keras.losses.BinaryCrossentropy):
def call(self, y_true, y_pred):
mask = y_true != -1
y_true = y_true[mask]
y_pred = y_pred[mask]
return super().call(y_true, y_pred)
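A brief usage sketch (assuming any binary-classification model): the masked loss is passed to compile like any other, and target entries equal to -1 are ignored.

# -1 entries in y_true are masked out of the loss
model.compile(optimizer='adam', loss=MaskedBinaryCrossentropy())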
A good starting point is the custom losses guide: https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses
I'm trying to implement a sentence similarity architecture based on this work, using the STS dataset. Labels are normalized similarity scores from 0 to 1, so it is assumed to be a regression model.
My problem is that the loss goes directly to NaN starting from the first epoch. What am I doing wrong?
I have already tried updating to the latest Keras and Theano versions.
The code for my model is:
def create_lstm_nn(input_dim):
    seq = Sequential()
    # embed using pretrained 300d embeddings
    seq.add(Embedding(vocab_size, emb_dim, mask_zero=True, weights=[embedding_weights]))
# encode via LSTM
seq.add(LSTM(128))
seq.add(Dropout(0.3))
return seq
lstm_nn = create_lstm_nn(input_dim)
input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))
processed_a = lstm_nn(input_a)
processed_b = lstm_nn(input_b)
cos_distance = merge([processed_a, processed_b], mode='cos', dot_axes=1)
cos_distance = Reshape((1,))(cos_distance)
distance = Lambda(lambda x: 1-x)(cos_distance)
model = Model(input=[input_a, input_b], output=distance)
# train
rms = RMSprop()
model.compile(loss='mse', optimizer=rms)
model.fit([X1, X2], y, validation_split=0.3, batch_size=128, nb_epoch=20)
I also tried using a simple Lambda instead of the Merge layer, but it has the same result.
def cosine_distance(vests):
x, y = vests
x = K.l2_normalize(x, axis=-1)
y = K.l2_normalize(y, axis=-1)
return -K.mean(x * y, axis=-1, keepdims=True)
def cos_dist_output_shape(shapes):
shape1, shape2 = shapes
return (shape1[0],1)
distance = Lambda(cosine_distance, output_shape=cos_dist_output_shape)([processed_a, processed_b])
NaN is a common issue in deep learning regression. Since you are using a Siamese network, you can try the following:
check your data: does it need to be normalized?
try adding a Dense layer as the last layer of your network, but be careful picking the activation function, e.g. relu
try another loss function, e.g. contrastive_loss (see the sketch below)
lower your learning rate, e.g. 0.0001
the cos mode does not carefully deal with division by zero, which might be the cause of the NaN
It is not easy to make deep learning work perfectly.
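For reference, a minimal sketch of a contrastive loss in the standard Hadsell et al. formulation; the margin value is an assumption, and it expects binary pair labels rather than the continuous similarity scores used above:

import keras.backend as K

def contrastive_loss(y_true, y_pred, margin=1.0):
    # y_true: 1 for similar pairs, 0 for dissimilar; y_pred: predicted distance
    positive = y_true * K.square(y_pred)
    negative = (1 - y_true) * K.square(K.maximum(margin - y_pred, 0.0))
    return K.mean(positive + negative)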
I didn't run into the NaN issue, but my loss wouldn't change. I found this info; check it out:
def cosine_distance(y_true, y_pred):
    # as a Keras loss, y_true and y_pred arrive as two separate arguments
def l2_normalize(x, axis):
norm = K.sqrt(K.sum(K.square(x), axis=axis, keepdims=True))
return K.sign(x) * K.maximum(K.abs(x), K.epsilon()) / K.maximum(norm, K.epsilon())
y_true = l2_normalize(y_true, axis=-1)
y_pred = l2_normalize(y_pred, axis=-1)
return K.mean(1 - K.sum((y_true * y_pred), axis=-1))
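If the intent is to swap this in for the 'mse' loss in the compile call above (my reading of the snippet), usage would look like:

# replace the earlier loss='mse' with the numerically safe cosine distance
model.compile(loss=cosine_distance, optimizer=rms)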