Wrap tensorflow function in keras layer - python

I'm trying to use the TensorFlow unique function (https://www.tensorflow.org/api_docs/python/tf/unique) in a Keras Lambda layer.
Code below:
def unique_idx(x):
    output = tf.unique(x)
    return output[1]
then
inp1 = Input(batch_shape=(None, 1))
idx = Lambda(unique_idx)(inp1)
model = Model(inputs=inp1, outputs=idx)
When I now call model.compile(optimizer='Adam', loss='mean_squared_error'),
I get the error:
ValueError: Tensor conversion requested dtype int32 for Tensor with
dtype float32: 'Tensor("lambda_9_sample_weights_1:0", shape=(?,),
dtype=float32)'
Does anybody know what the error is here, or a different way of using the TensorFlow function?

A Keras model expects float32 as output, but the indices returned from tf.unique are int32. A cast fixes this.
Another issue is that tf.unique expects a flattened array; a reshape fixes that one.
import tensorflow as tf
from keras import Input
from keras.layers import Lambda
from keras.engine import Model

def unique_idx(x):
    x = tf.reshape(x, [-1])              # tf.unique expects a 1-D tensor
    u, indices = tf.unique(x)
    return tf.cast(indices, tf.float32)  # Keras expects a float32 output

x = Input(shape=(1,))
y = Lambda(unique_idx)(x)
model = Model(inputs=x, outputs=y)
model.compile(optimizer='adam', loss='mse')
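A quick sanity check (with made-up sample values; the unique indices of [1., 2., 1.] should be [0., 1., 0.]):
import numpy as np

# Three samples with one duplicated value; should print something like [0. 1. 0.]
print(model.predict(np.array([[1.], [2.], [1.]])))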

Related

Tensorflow layer working outside of model but not inside

I have a custom TensorFlow layer which works fine when called directly, but it throws an error when used with the Keras functional model API. Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
# ------ Custom Layer -----------
class CustomLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomLayer, self).__init__()

    def split_heads(self, x):
        batch_size = x.shape[0]
        split_inputs = tf.reshape(x, (batch_size, -1, 3, 1))
        return split_inputs

    def call(self, q):
        qs = self.split_heads(q)
        return qs
# ------ Testing Layer with sample data --------
x = np.random.rand(1, 2, 3)
values_emb = CustomLayer()(x)
print(values_emb)
This generates the following output:
tf.Tensor(
[[[[0.7148978 ]
   [0.3997009 ]
   [0.11451813]]

  [[0.69927174]
   [0.71329576]
   [0.6588452 ]]]], shape=(1, 2, 3, 1), dtype=float32)
But when I use it in the Keras functional API it doesn't work. Here is the code:
x = Input(shape=(2,3))
values_emb = CustomLayer()(x)
model = Model(x, values_emb)
model.summary()
It gives this error:
TypeError: Failed to convert elements of (None, -1, 3, 1) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.
Does anyone know why this happens and how it can be fixed?
I think you should maybe try using tf.shape in your custom layer, since it will give you the dynamic shape of a tensor:
batch_size = tf.shape(x)[0]
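A minimal sketch of the fixed layer (same code as above, only the batch_size line changed). With Input, x.shape[0] is None at graph-construction time, so the tuple (None, -1, 3, 1) cannot be converted to a tensor; tf.shape(x)[0] is a dynamic scalar tensor and works:
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input

class CustomLayer(tf.keras.layers.Layer):
    def split_heads(self, x):
        batch_size = tf.shape(x)[0]  # dynamic batch size, handles a None batch dimension
        return tf.reshape(x, (batch_size, -1, 3, 1))

    def call(self, q):
        return self.split_heads(q)

x = Input(shape=(2, 3))
values_emb = CustomLayer()(x)
model = Model(x, values_emb)
model.summary()  # output shape should be (None, 2, 3, 1)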

How to pass the input tensor of a model to a loss function?

My goal is to create a custom loss function that calculates the loss based on y_true, y_pred and the tensor of the model's input layer:
import numpy as np
from tensorflow import keras as K
input_shape = (16, 16, 1)
input = K.layers.Input(input_shape)
dense = K.layers.Dense(16)(input)
output = K.layers.Dense(1)(dense)
model = K.Model(inputs=input, outputs=output)
def CustomLoss(y_true, y_pred):
    return K.backend.sum(K.backend.abs(y_true - model.input * y_pred))

model.compile(loss=CustomLoss)
model.fit(np.ones(input_shape), np.zeros(input_shape))
However, this code fails with the following error message:
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
How can I pass the input tensor of my model to the loss function?
Tensorflow Version: 2.4.1
Python Version: 3.8.8
You can use add_loss to pass extra tensors to your loss. Here is an example:
import numpy as np
from tensorflow import keras as K

def CustomLoss(y_true, y_pred, input_l):
    return K.backend.sum(K.backend.abs(y_true - input_l * y_pred))

input_shape = (16, 16, 1)
n_sample = 10
X = np.random.uniform(0, 1, (n_sample,) + input_shape)
y = np.random.uniform(0, 1, (n_sample,) + input_shape)

inp = K.layers.Input(input_shape)
dense = K.layers.Dense(16)(inp)
out = K.layers.Dense(1)(dense)
target = K.layers.Input(input_shape)

model = K.Model(inputs=[inp, target], outputs=out)
model.add_loss(CustomLoss(target, out, inp))
model.compile(loss=None, optimizer='adam')
model.fit(x=[X, y], y=None, epochs=3)
To use the model in inference mode (removing the target from the inputs):
final_model = K.Model(model.input[0], model.output)
final_model.predict(X)

Tensorflow - ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer`

I have created an RNN with the Keras functional API in TensorFlow 2.0, where the following piece of code works:
sum_input = keras.Input(shape=(UNIT_SIZE, 256,), name='sum')
x = tf.unstack(sum_input, axis=2, num=256)
t_sum = x[0]
for i in range(len(x) - 1):
    t_sum = keras.layers.Add()([t_sum, x[i+1]])
sum_m = keras.Model(inputs=sum_input, outputs=t_sum, name='sum_model')
I then had to change to TensorFlow 1.13, which gives me the following error:
ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: Tensor("add_254/add:0", shape=(?, 40), dtype=float32)
I don't understand why the output tensor is not from a Tensorflow layer, since t_sum is the output from keras.layers.Add.
I have tried to wrap parts of the code into keras.layers.Lambda as suggested in ValueError: Output tensors to a Model must be the output of a TensorFlow Layer, but it doesn't seem to work for me.
The problem is not with the Add() layer but with tf.unstack(): it is not an instance of keras.layers.Layer. You can just wrap it up as a custom layer:
import tensorflow as tf

class Unstack(tf.keras.layers.Layer):
    def __init__(self):
        super(Unstack, self).__init__()

    def call(self, inputs, num=256):
        return tf.unstack(inputs, axis=2, num=num)

x = Unstack()(sum_input)
Or, instead of subclassing, you can do it using a Lambda layer:
x = tf.keras.layers.Lambda(lambda t: tf.unstack(t, axis=2, num=256))(sum_input)
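Putting it together with the original model, a sketch (UNIT_SIZE is an assumption here; the shape (?, 40) in the error message suggests 40):
import tensorflow as tf
from tensorflow import keras

UNIT_SIZE = 40  # assumed value, going by shape=(?, 40) in the error message

sum_input = keras.Input(shape=(UNIT_SIZE, 256,), name='sum')
# The Lambda layer carries the Keras metadata that a bare tf.unstack call lacks
x = keras.layers.Lambda(lambda t: tf.unstack(t, axis=2, num=256))(sum_input)
t_sum = x[0]
for i in range(len(x) - 1):
    t_sum = keras.layers.Add()([t_sum, x[i+1]])
sum_m = keras.Model(inputs=sum_input, outputs=t_sum, name='sum_model')
sum_m.summary()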

Convert a tensor from 128,128,3 to 129,128,3, where the 1,128,3 values are padded onto that tensor later

This is my piece of code for a GAN where the model is being initialized; everything is working, and only the code relevant to the problem is shown here:
z = Input(shape=(100+384,))
img = self.generator(z)
print("before: ", img)   # 128x128x3 shape, dtype=tf.float32
temp = tf.get_variable("temp", [1, 128, 3], dtype=tf.float32)
img = tf.concat(img, temp)
print("after: ", img)    # ValueError: Incompatible type conversion requested to type 'int32' for variable of type 'float32_ref'
valid = self.discriminator(img)
self.combined = Model(z, valid)
I have 128x128x3 images to generate. What I want to do is give 129x128x3 images to the discriminator: the 1x128x3 text-embedding matrix is concatenated with the image during training. But I have to specify at the start the shape of the tensors and input values that each model, i.e. GEN and DISC, will get. GEN takes the 100 noise + 384 embedding matrix and generates a 128x128x3 image, which is again embedded with a 1x128x3 embedding and fed to DISC. So my question is whether this approach is correct or not. Also, if it is correct, how can I specify the shapes at the start so that I don't get errors like incompatible shape? At the start I have to add these lines:
z = Input(shape=(100+384,))
img = self.generator(z) #128x128x3
valid = self.discriminator(img) #should be 129x128x3
self.combined = Model(z, valid)
But img is 128x128x3 and is later changed to 129x128x3 during training by concatenating the embedding matrix. So how can I change img from 128,128,3 to 129,128,3 in the above code, either by padding or by appending another tensor (simply reshaping is of course not possible)? Any help will be much appreciated. Thanks.
The first argument of tf.concat should be the list of tensors, while the second is the axis along which to concatenate. You could concatenate the img and temp tensors as follows:
import tensorflow as tf

img = tf.ones(shape=(128, 128, 3))
temp = tf.get_variable("temp", [1, 128, 3], dtype=tf.float32)
img = tf.concat([img, temp], axis=0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # temp must be initialized before use
    print(sess.run(tf.shape(img)))  # [129 128   3]
UPDATE: Here you have a minimal example showing why you get the error "AttributeError: 'Tensor' object has no attribute '_keras_history'". This error pops up in the following snippet:
from keras.layers import Input, Lambda, Dense
from keras.models import Model
import tensorflow as tf
img = Input(shape=(128, 128, 3)) # Shape=(batch_size, 128, 128, 3)
temp = Input(shape=(1, 128, 3)) # Shape=(batch_size, 1, 128, 3)
concat = tf.concat([img, temp], axis=1)
print(concat.get_shape())
dense = Dense(1)(concat)
model = Model(inputs=[img, temp], outputs=dense)
This happens because the tensor concat is not a Keras tensor, and therefore some of the typical Keras tensors' attributes (such as _keras_history) are missing. To overcome this problem, you need to encapsulate all TensorFlow tensors into a Keras Lambda layer:
from keras.layers import Input, Lambda, Dense
from keras.models import Model
import tensorflow as tf
img = Input(shape=(128, 128, 3)) # Shape=(batch_size, 128, 128, 3)
temp = Input(shape=(1, 128, 3)) # Shape=(batch_size, 1, 128, 3)
concat = Lambda(lambda x: tf.concat([x[0], x[1]], axis=1))([img, temp])
print(concat.get_shape())
dense = Dense(1)(concat)
model = Model(inputs=[img, temp], outputs=dense)
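As an aside (not part of the original answer), Keras also provides a built-in Concatenate layer that avoids the Lambda entirely; a sketch:
from keras.layers import Concatenate

# Concatenate is a proper Keras layer, so its output carries _keras_history
concat = Concatenate(axis=1)([img, temp])  # Shape=(batch_size, 129, 128, 3)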

Custom weighted loss function in Keras for weighing each element

I'm trying to create a simple weighted loss function.
Say, I have input dimensions 100 * 5, and output dimensions also 100 * 5. I also have a weight matrix of the same dimension.
Something like the following:
import numpy as np
train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5)*0.01 + train_X
weights = np.random.randn(*train_X.shape)
Defining the custom loss function
def custom_loss_1(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred) * weights)
Defining the model
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K
input_layer = Input(shape=(5,))
out = Dense(5)(input_layer)
model = Model(input_layer, out)
Testing with existing metrics works fine
model.compile('adam','mean_absolute_error')
model.fit(train_X, train_Y, epochs=1)
Testing with our custom loss function doesn't work
model.compile('adam',custom_loss_1)
model.fit(train_X, train_Y, epochs=10)
It gives the following stack trace:
InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5]
[[Node: loss_9/dense_8_loss/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_9/dense_8_loss/Abs, loss_9/dense_8_loss/mul/y)]]
Where is the number 32 coming from?
Testing a loss function with weights as Keras tensors
def custom_loss_2(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred) * K.ones_like(y_true))
This function seems to do the job, which suggests that a weight matrix given as a Keras tensor would work. So, I created another version of the loss function.
Loss function try 3
from functools import partial

def custom_loss_3(y_true, y_pred, weights):
    return K.mean(K.abs(y_true - y_pred) * K.variable(weights, dtype=y_true.dtype))

cl3 = partial(custom_loss_3, weights=weights)
Fitting data using cl3 gives the same error as above.
InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5]
[[Node: loss_11/dense_8_loss/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_11/dense_8_loss/Abs, loss_11/dense_8_loss/Variable/read)]]
I wonder what I'm missing! I could have used the notion of sample_weight in Keras, but then I'd have to reshape my inputs to a 3-D tensor.
I thought that this custom loss function should really have been trivial.
In model.fit the batch size is 32 by default; that's where this number is coming from. Here's what's happening:
In custom_loss_1 the tensor K.abs(y_true-y_pred) has shape (batch_size=32, 5), while the numpy array weights has shape (100, 5). This is an invalid multiplication, since the dimensions don't agree and broadcasting can't be applied.
In custom_loss_2 this problem doesn't exist because you're multiplying 2 tensors with the same shape (batch_size=32, 5).
In custom_loss_3 the problem is the same as in custom_loss_1, because converting weights into a Keras variable doesn't change their shape.
UPDATE: It seems you want to give a different weight to each element in each training sample, so the weights array should have shape (100, 5) indeed.
In this case, I would feed your weights array into your model as an additional input and then use this tensor within the loss function:
import numpy as np
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K
from functools import partial

def custom_loss_4(y_true, y_pred, weights):
    return K.mean(K.abs(y_true - y_pred) * weights)

train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5) * 0.01 + train_X
weights = np.random.randn(*train_X.shape)

input_layer = Input(shape=(5,))
weights_tensor = Input(shape=(5,))
out = Dense(5)(input_layer)
cl4 = partial(custom_loss_4, weights=weights_tensor)
model = Model([input_layer, weights_tensor], out)
model.compile('adam', cl4)
model.fit(x=[train_X, weights], y=train_Y, epochs=10)
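For inference, the weights input still has to be fed; a minimal sketch (the all-ones dummy weights are an assumption here; they only enter the loss, not the predictions):
# Predictions don't depend on the weights input, so dummy ones are fine
test_X = np.random.randn(10, 5)
preds = model.predict([test_X, np.ones_like(test_X)])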
