CNN Image Segmentation - Train Loss decreasing but Val Loss constant? - python

My U-Net's train_dice_loss is decreasing, but the val_dice_loss stays at 0.4. It looks like the network is overfitting, but shouldn't val_dice_loss increase at some point?
The network is based on the Carvana Segmentation Competition (Colab Carvana Segmentation). I use the same model, target function and data augmentation pipeline, but I have much less data (~1,900 images of 256x256 px each). I split my data into a training, validation and test set. On the test set the model predicts quite well (average dice_coeff ~0.75), but I can't explain this graph.
Additional Information:
def dice_coeff(y_true, y_pred):
    smooth = 1.
    # Flatten
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    score = (2. * intersection + smooth) / (tf.reduce_sum(y_true_f) +
                                            tf.reduce_sum(y_pred_f) + smooth)
    return score

def dice_loss(y_true, y_pred):
    loss = 1 - dice_coeff(y_true, y_pred)
    return loss

def bce_dice_loss(y_true, y_pred):
    loss = losses.binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
    return loss
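For completeness, the loss and metric above are plugged into training roughly like this (a sketch only; the Adam settings and the train_generator/val_generator names are placeholders, not the exact notebook code):
# Rough sketch of the training setup; optimizer settings and generator names are placeholders
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=bce_dice_loss,
              metrics=[dice_coeff])

history = model.fit(train_generator,
                    validation_data=val_generator,
                    epochs=50)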
I also tried different splits and Keras optimizers; the validation loss always plateaus at ~0.4.

Related

Tensorflow custom loss for multiple outputs model

I am working on an image segmentation and classification model which takes one 3D image (64x64x64) and returns two outputs (a 3D mask of 64x64x64 and a one-hot encoded category). The two outputs have been defined like this:
seg_final_stage = Conv3D(seg_classes, kernel_size=3, strides=1, padding="same", name="Seg_Final")(decoder1)
output_seg = Activation('sigmoid', name = "Seg_Final_Sigmoid")(seg_final_stage)
class_final = Dense(texture_classes, name="Class_Final")(class_dense3)
output_class = Activation('softmax', name = "Class_Final_SoftMax")(class_final)
model = Model(input, [output_seg, output_class], name = Config["Model_Name"])
There are two different methods I have tried, but both failed; it seems like each output returns its own loss instead of just one loss. The following are the loss functions I have now.
# Custom loss functions
def dice_coef(y_true, y_pred):
    smooth = 1.
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    score = (2. * intersection + smooth) / (tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
    return score

def dice_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)

def CCE(y_true, y_pred):
    class_loss = CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
    return class_loss(y_true, y_pred)

def hybrid(y_true, y_pred):
    mask_weight = 0.8
    class_weight = 0.2
    mask_loss = dice_loss(y_true[0], y_pred[0])
    class_loss = CCE(y_true[1], y_pred[1])
    return mask_weight * mask_loss + class_weight * class_loss
The first try:
I use a single hybrid loss function that tries to sum the dice loss and CCE with weights. However, the model returns two losses, output_seg_hybrid and output_class_hybrid. I thought y_true and y_pred would be passed as a list so that I could take the first item to calculate the dice loss and the second item for CCE.
model.compile(loss=hybrid,
              optimizer=Nadam(learning_rate=Config["learning_rate"], beta_1=0.9, beta_2=0.999, epsilon=1e-07, name="Nadam"),
              metrics=[hybrid])
The second try:
I applied two losses as shown below, one per output, so that each output gets its corresponding loss and the two are summed up with weights. What I get is actually four losses: Seg_Final_Sigmoid_dice_loss, Class_Final_SoftMax_dice_loss, Seg_Final_Sigmoid_CCE and Class_Final_SoftMax_CCE.
model.compile(loss={'Seg_Final_Sigmoid': dice_loss, 'Class_Final_SoftMax': CCE},
              optimizer=model.optimizer, metrics=[dice_loss, CCE], loss_weights=[0.8, 0.2])
What should I do if I just want to sum these two losses up into one loss?
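For reference, the usual way to get a single optimized quantity here is to keep one loss per output and let Keras do the weighted summation: the total loss that is actually minimized is the weighted sum 0.8 * dice_loss + 0.2 * CCE, and the per-output entries in the training log are only a breakdown of that sum. A minimal sketch reusing the names from above (the metrics dict is an assumption):
# Sketch: one loss per output plus loss_weights; Keras minimizes the single
# weighted sum 0.8 * dice_loss(seg output) + 0.2 * CCE(class output).
model.compile(
    optimizer=Nadam(learning_rate=Config["learning_rate"]),
    loss={'Seg_Final_Sigmoid': dice_loss, 'Class_Final_SoftMax': CCE},
    loss_weights={'Seg_Final_Sigmoid': 0.8, 'Class_Final_SoftMax': 0.2},
    metrics={'Seg_Final_Sigmoid': dice_coef, 'Class_Final_SoftMax': 'accuracy'})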

Why am I getting NaN values for a custom Dice loss in Keras?

I am using Keras for boundary/contour detection with a U-Net. When I use binary cross-entropy as the loss, the losses decrease over time as expected and the predicted boundaries look reasonable.
However, I have tried a custom Dice loss with varying learning rates, and none of them work well.
smooth = 1e-6

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
The loss values don't improve. That is, training shows something like
loss: nan - dice: .9607 - val_loss: nan - val_dice: .9631
I get NaNs for the losses, while dice and val_dice barely change as the epochs go by. This happens regardless of the learning rate, anywhere from 0.01 down to 1e-6.
The dimensions of the train images/labels are N x H x W x 1, where N is the number of images and H/W are the height/width of each image.
Can anyone help?
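One way to narrow this down (a debugging sketch, not from the original post; train_images/train_labels are placeholder names) is to check the label range and look for NaN/Inf in the inputs, since the smoothed Dice ratio itself rarely produces NaN on its own unless the tensors already contain them or the masks are far outside [0, 1]:
import numpy as np

# Debugging sketch: check that masks really are 0/1 and that nothing is NaN/Inf.
print("label range:", train_labels.min(), train_labels.max())   # expect 0.0 and 1.0
print("NaN in labels:", np.isnan(train_labels).any())
print("NaN/Inf in images:", (~np.isfinite(train_images)).any())

# If the masks are stored as 0/255, rescale them before training:
train_labels = (train_labels > 0).astype("float32")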

TensorFlow 2 custom loss: "No gradients provided for any variable" error

I have an image segmentation problem I have to solve in TensorFlow 2.
In particular, I have a training set composed of aerial images paired with their respective masks. In a mask the terrain is colored black and the buildings are colored white. The purpose is to predict the masks for the images in the test set.
I use a U-Net with a final Conv2DTranspose with 1 filter and a sigmoid activation function. The prediction is made on the output of the final sigmoid layer in the following way: if y_pred > 0.5 it's a building, otherwise it's the background.
I want to implement a dice loss, so I wrote the following function:
def dice_loss(y_true, y_pred):
    print("[dice_loss] y_pred=", y_pred, "y_true=", y_true)
    y_pred = tf.cast(y_pred > 0.5, tf.float32)
    y_true = tf.cast(y_true, tf.float32)
    numerator = 2 * tf.reduce_sum(y_true * y_pred)
    denominator = tf.reduce_sum(y_true + y_pred)
    return 1 - numerator / denominator
which I pass to TensorFlow in the following way:
loss = dice_loss
optimizer = tf.keras.optimizers.Adam(learning_rate=config.learning_rate)
metrics = [my_IoU, 'acc']
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
but at training time TensorFlow throws the following error:
ValueError: No gradients provided for any variable:
The problem is in your loss function (obviously). Particularly, the following operation.
y_pred = tf.cast(y_pred > 0.5, tf.float32)
This is not a differentiable operation, which results in the gradients being None. Change your loss function to the following and it will work:
def dice_loss(y_true, y_pred):
    print("[dice_loss] y_pred=", y_pred, "y_true=", y_true)
    y_true = tf.cast(y_true, tf.float32)
    numerator = 2 * tf.reduce_sum(y_true * y_pred)
    denominator = tf.reduce_sum(y_true + y_pred)
    return 1 - numerator / denominator
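If you still want to track the thresholded score, a common pattern (an addition of mine, not part of the fix above) is to keep the hard version as a metric, since metrics are never differentiated, and leave only the soft dice as the loss:
# The thresholded dice is fine as a metric because metrics are not differentiated.
def hard_dice_metric(y_true, y_pred):
    y_pred = tf.cast(y_pred > 0.5, tf.float32)
    y_true = tf.cast(y_true, tf.float32)
    numerator = 2 * tf.reduce_sum(y_true * y_pred)
    denominator = tf.reduce_sum(y_true + y_pred)
    return numerator / (denominator + 1e-7)

model.compile(optimizer=optimizer, loss=dice_loss, metrics=[hard_dice_metric, my_IoU, 'acc'])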

Custom loss function in Keras for weighting missclassified samples

Assume that y_true and y_pred are in [-1, 1]. I want a weighted mean-squared-error loss function, in which the loss for samples that are positive in y_true and negative in y_pred, or vice versa, is weighted by exp(alpha). Here is my code:
import tensorflow as tf
import keras.backend as K

alpha = 1.0

def custom_loss(y_true, y_pred):
    se = K.square(y_pred - y_true)
    true_label = K.less_equal(y_true, 0.0)
    pred_label = K.less_equal(y_pred, 0.0)
    return K.mean(se * K.exp(alpha * K.cast(K.not_equal(true_label, pred_label), tf.float32)))
And here is a plot of this loss function; the different curves are for different values of y_true.
I want to know:
Is this a valid loss function, given that it is not differentiable at 0?
Is my code correct?
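As a quick check of the second question, the function can be evaluated eagerly on a few hand-picked values (an illustrative sketch; the numbers are arbitrary):
# Rows 2 and 3 cross the sign boundary, so their squared error should be scaled
# by exp(alpha); row 1 agrees in sign and should stay unscaled.
y_true_sample = tf.constant([[0.5], [-0.5], [0.5]], dtype=tf.float32)
y_pred_sample = tf.constant([[0.4], [0.4], [-0.4]], dtype=tf.float32)
print(custom_loss(y_true_sample, y_pred_sample).numpy())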
I suggest this type of loss function to handle an imbalanced dataset:
def focal_loss(y_true, y_pred):
    gamma = 2.0
    alpha = 0.25
    pt_1 = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))
    pt_0 = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))
    return -K.sum(alpha * K.pow(1. - pt_1, gamma) * K.log(pt_1)) \
           - K.sum((1 - alpha) * K.pow(pt_0, gamma) * K.log(1. - pt_0))
from this source
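It is used like any other custom loss (a usage sketch; it assumes the model's final layer is a sigmoid and the targets are 0/1):
# Usage sketch: plug the focal loss into compile like any other loss function.
model.compile(optimizer='adam', loss=focal_loss, metrics=['accuracy'])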

Make a custom loss function in keras

Hi, I have been trying to make a custom loss function in Keras for the dice error coefficient. It has implementations in tensorboard, and I tried using the same function in Keras with TensorFlow, but it keeps returning a NoneType when I use model.train_on_batch or model.fit, whereas it gives proper values when used as a metric in the model. Can someone please help me out with what I should do? I have tried libraries like Keras-FCN by ahundt, where custom loss functions are used, but none of it seems to work. The target and output in the code are y_true and y_pred respectively, as used in the losses.py file in Keras.
def dice_hard_coe(target, output, threshold=0.5, axis=[1, 2], smooth=1e-5):
    """References
    -----------
    - `Wiki-Dice <https://en.wikipedia.org/wiki/Sørensen–Dice_coefficient>`_
    """
    output = tf.cast(output > threshold, dtype=tf.float32)
    target = tf.cast(target > threshold, dtype=tf.float32)
    inse = tf.reduce_sum(tf.multiply(output, target), axis=axis)
    l = tf.reduce_sum(output, axis=axis)
    r = tf.reduce_sum(target, axis=axis)
    hard_dice = (2. * inse + smooth) / (l + r + smooth)
    hard_dice = tf.reduce_mean(hard_dice)
    return hard_dice
There are two steps in implementing a parameterized custom loss function in Keras. First, writing a method for the coefficient/metric. Second, writing a wrapper function to format things the way Keras needs them to be.
It's actually quite a bit cleaner to use the Keras backend instead of tensorflow directly for simple custom loss functions like DICE. Here's an example of the coefficient implemented that way:
import keras.backend as K

def dice_coef(y_true, y_pred, smooth, thresh):
    # cast the boolean threshold mask back to floats so it can be multiplied below
    y_pred = K.cast(y_pred > thresh, K.floatx())
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
Now for the tricky part. Keras loss functions must only take (y_true, y_pred) as parameters. So we need a separate function that returns another function.
def dice_loss(smooth, thresh):
    def dice(y_true, y_pred):
        return -dice_coef(y_true, y_pred, smooth, thresh)
    return dice
Finally, you can use it as follows in Keras compile.
# build model
model = my_model()
# get the loss function
model_dice = dice_loss(smooth=1e-5, thresh=0.5)
# compile model
model.compile(loss=model_dice)
According to the documentation, you can use a custom loss function like this:
Any callable with the signature loss_fn(y_true, y_pred) that returns an array of losses (one per sample in the input batch) can be passed to compile() as a loss. Note that sample weighting is automatically supported for any such loss.
As a simple example:
def my_loss_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)  # Note the `axis=-1`
model.compile(optimizer='adam', loss=my_loss_fn)
Complete example:
import tensorflow as tf
import numpy as np

def my_loss_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)  # Note the `axis=-1`

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1)])

model.compile(optimizer='adam', loss=my_loss_fn)

x = np.random.rand(1000, 1)  # shape (1000, 1): Dense layers expect 2-D input
y = x**2

history = model.fit(x, y, epochs=10)
In addition, you can extend an existing loss function by inheriting from it, for example masking the BinaryCrossentropy:
class MaskedBinaryCrossentropy(tf.keras.losses.BinaryCrossentropy):
    def call(self, y_true, y_pred):
        mask = y_true != -1
        y_true = y_true[mask]
        y_pred = y_pred[mask]
        return super().call(y_true, y_pred)
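A usage sketch (my own example, not from the guide): any sample whose label equals -1 is dropped from the loss, everything else goes through ordinary binary cross-entropy.
# Usage sketch: samples labeled -1 are masked out of the loss.
model.compile(optimizer='adam', loss=MaskedBinaryCrossentropy())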
A good starting point is the custom losses guide: https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses
