I am trying to implement a custom objective function in the Keras framework.
Specifically, a weighted average function that takes the two tensors y_true and y_pred; the weight information is derived from the y_true tensor.
Is there a weighted average function in TensorFlow?
Or do you have any other suggestions on how to implement this kind of loss function?
My function would look something like this (pseudocode):
def function(y_true, y_pred):
    A = (y_true - y_pred) ** 2
    w = ...  # derivable from y_true, a tensor of the same shape as y_true
    return average(A, weights=w)  # a scalar
y_true and y_pred are 3D tensors.
You can use one of the existing objectives (also called losses) in Keras from here.
You may also implement your own custom loss function:
from keras import backend as K

def my_loss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

# Let's train the model using SGD
model.compile(loss=my_loss, optimizer='SGD', metrics=['accuracy'])
Notice the K module; it is the Keras backend, and you should use it to fully utilize Keras's performance. Don't do something like this unless you don't care about performance:
def my_bad_and_slow_loss(y_true, y_pred):
    return sum((y_pred - y_true) ** 2, axis=-1)  # plain Python sum, not a backend op
For your specific case, please post your desired objective function if you need help writing it.
Update
You can try this to provide weights W in the loss function:
import numpy as np
from keras import backend as K

def my_loss(y_true, y_pred):
    W = np.arange(9) / 9.  # some example W
    return K.mean(K.pow(y_true - y_pred, 2) * W)
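To match the weighted average asked for above, here is a minimal sketch; the weight rule K.abs(y_true) is a placeholder assumption, so substitute your own derivation from y_true:
from keras import backend as K

def weighted_mse(y_true, y_pred):
    A = K.square(y_true - y_pred)
    # hypothetical weight rule; replace with your own derivation from y_true
    w = K.abs(y_true)
    # weighted average: sum(A * w) / sum(w) -> a scalar
    return K.sum(A * w) / (K.sum(w) + K.epsilon())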
Related
Is there a way to have two loss functions in Keras in which the second loss function takes the output from the first loss function?
I am working on a neural network with Keras and I want to add another custom function to the loss term inside model.compile() to regularize and somehow penalize it. It has this form:
model.compile(loss_1='mean_squared_error', optimizer=Adam(lr=learning_rate), metrics=['mae'])
I would like to add another loss function as a sum of the predicted values from the loss_1 output, so that I can tell the neural network to minimize the sum of the predicted values from the loss_1 model. How can I do that (loss_2)?
Something like:
model.compile(loss_1='mean_squared_error', loss_2= np.sum(****PREDICTED_OUTPUT_FROM_LOSS_FUNCTION_1****), optimizer=Adam(lr=learning_rate), metrics=['mae'])
How can this be implemented?
You should define a custom loss function:
import tensorflow as tf

def custom_loss_function(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    absolute_difference = tf.abs(y_true - y_pred)
    loss = tf.reduce_mean(squared_difference, axis=-1) + tf.reduce_mean(absolute_difference, axis=-1)
    return loss

model.compile(optimizer='adam', loss=custom_loss_function)
I believe that would solve your problem.
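If the second term is literally meant to penalize the predicted values themselves, as the question describes, a minimal sketch (alpha is an assumed weighting factor, not from the question):
import tensorflow as tf

def mse_plus_prediction_penalty(y_true, y_pred):
    mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    # alpha is a hypothetical weighting factor you would tune
    alpha = 0.01
    # second term: penalize the predicted values themselves
    return mse + alpha * tf.reduce_mean(y_pred, axis=-1)

model.compile(optimizer=Adam(lr=learning_rate), loss=mse_plus_prediction_penalty, metrics=['mae'])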
I am working on creating a custom loss function in Keras.
Here is an example.
import keras.backend as K

def test(y_true, y_pred):
    loss = K.square(y_pred - y_true)
    loss = K.mean(loss, axis=1)
    return loss
Now in this example, I would like to subtract, let's say, only specific values from y_pred; but since this is in TensorFlow, how do I iterate through them?
For example, can I iterate through y_pred to pick values? And how?
Let's say for this example the batch size is 5.
I have tried things such as
y_pred[0...i]
tf.arange and many more...
Just pass it when you are compiling the model, like:
model.compile(optimizer='sgd', loss=test)
Keras will iterate over it automatically.
You also have an indentation error in the return statement.
import keras.backend as K

def test(y_true, y_pred):
    loss = K.square(y_pred - y_true)
    loss = K.mean(loss, axis=1)
    return loss

def test_accuracy(y_true, y_pred):
    return 1 - test(y_true, y_pred)
This way you can pass your custom loss function to the model, and you can also pass an accuracy function similarly:
model.compile(optimizer='sgd', loss = test, metrics=[test_accuracy])
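As for picking specific values from y_pred: rather than iterating in Python, you can slice the tensors directly. A minimal sketch, where the indices are purely illustrative:
import keras.backend as K

def sliced_loss(y_true, y_pred):
    # instead of Python iteration, slice the tensors directly;
    # here we use only the first 3 columns of each row
    picked_pred = y_pred[:, :3]
    picked_true = y_true[:, :3]
    return K.mean(K.square(picked_pred - picked_true), axis=-1)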
I would like to verify my loss function, because I have read that there are issues with the MSE loss function in Keras.
Consider an LSTM model in Keras predicting a 3D time series as multiple targets (y1, y2, y3). Suppose the shape of a batch of output sequences is (10, 31, 1).
Will the loss function below take the squared difference between the predicted and true output, then take the mean of the 310 samples, resulting in a single loss value? How would this operation happen if the 3 outputs were concatenated as (10, 31, 3)?
def mse(y_true, y_pred):
    return keras.backend.mean(keras.backend.square(y_pred - y_true), axis=1)
If you want to get a single loss value, you should not set axis.
import keras.backend as K

def mse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true))

y_true = K.random_normal(shape=(10, 31, 3))
y_pred = K.random_normal(shape=(10, 31, 3))
loss = mse(y_true, y_pred)
print(K.eval(loss))  # prints e.g. 2.0196152
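For reference, keeping axis=1 as in the question turns a (10, 31, 3) batch into a (10, 3) tensor of per-sequence, per-target losses, which Keras then averages into a scalar on its own. A quick shape check:
import keras.backend as K

def mse_axis1(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=1)

y_true = K.random_normal(shape=(10, 31, 3))
y_pred = K.random_normal(shape=(10, 31, 3))
print(K.int_shape(mse_axis1(y_true, y_pred)))  # (10, 3)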
I need to create a custom loss function in Keras that returns two different loss values depending on the result of a conditional. I am having trouble getting the if statement to run properly.
I need to do something similar to this:
def custom_loss(y_true, y_pred):
    sees = tf.Session()
    const = 2
    if sees.run(tf.keras.backend.less(y_pred, y_true)):  # i.e. y_pred - y_true < 0
        return const * mean_squared_error(y_true, y_pred)
    else:
        return mean_squared_error(y_true, y_pred)
I keep getting tensor errors (see below) when trying to run this. Any help/advice will be appreciated!
InvalidArgumentError: You must feed a value for placeholder tensor 'dense_63_target' with dtype float and shape [?,?]
[[Node: dense_63_target = Placeholder[dtype=DT_FLOAT, shape=[?,?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
You should instead simply multiply by a mask to get your desired function:
import keras.backend as K

def custom_1loss(y_true, y_pred):
    const = 2
    # boolean mask, cast to floats: 1.0 where y_pred < y_true (an under-prediction)
    mask = K.cast(K.less(y_pred, y_true), K.floatx())
    squared_error = K.square(y_pred - y_true)
    # under-predicted entries are weighted by const, everything else by 1
    return K.mean(((const - 1) * mask + 1) * squared_error, axis=-1)

which has the desired behavior: when y_pred is an under-prediction, an extra MSE term is added. Note that the boolean mask produced by K.less has to be cast to a float tensor before multiplying, which K.cast does above.
Also, as unsolicited advice on your approach in general: I think you would get better results with a different approach to the loss.
import keras.backend as K
from keras.losses import mean_squared_error

def custom_loss2(y_true, y_pred):
    beta = 0.1
    return mean_squared_error(y_true, y_pred) + beta * K.mean(y_true - y_pred)
Observe the difference in gradient behavior:
https://www.desmos.com/calculator/uubwgdhpi6
The second loss function shifts the location of the local minimum to a slight over-prediction rather than an under-prediction (based on what you want). The loss function you give still locally optimizes to a mean of 0, but with gradients of different strength. This will most likely just converge more slowly to the same result as MSE, rather than giving a model that would rather over-predict than under-predict. I hope this makes sense.
I would like to create a custom loss function that has a weight term that's updated based on what epoch I'm in.
For example:
Let's say I have a loss function which has a beta weight, where beta increases over the first 20 epochs...
def custom_loss(x, x_pred):
    loss1 = objectives.binary_crossentropy(x, x_pred)
    loss2 = objectives.mse(x, x_pred)
    return (beta * current_epoch / 20) * loss1 + loss2
How could I implement something like this into a keras loss function?
Looking at their documentation, they mention that you can use Theano/TF symbolic functions that return a scalar for each data point.
So you could do something like this:

loss = tf.contrib.losses.softmax_cross_entropy(x, x_pred) * \
       (beta * current_epoch / 20) + \
       tf.contrib.losses.mean_squared_error(x, x_pred)

You would have to pass x and x_pred as tf.placeholders.
I think for model creation you could still use Keras, but then you would have to run the computational graph with sess.run().
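A pattern that stays inside Keras is also possible (a sketch, assuming beta should ramp linearly to 1.0 over the first 20 epochs): keep beta in a backend variable and update it from a callback at the start of each epoch. keras.losses is used here in place of the older objectives module:

import keras.backend as K
from keras import losses
from keras.callbacks import Callback

# beta lives in a backend variable, so the loss always reads its current value
beta = K.variable(0.0)

def custom_loss(x, x_pred):
    loss1 = losses.binary_crossentropy(x, x_pred)
    loss2 = losses.mse(x, x_pred)
    return beta * loss1 + loss2

class BetaScheduler(Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # ramp beta linearly over the first 20 epochs, then hold it at 1.0
        K.set_value(beta, min(epoch / 20.0, 1.0))

# usage: model.fit(x, y, epochs=50, callbacks=[BetaScheduler()])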
References:
https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html#using-keras-models-with-tensorflow