2D sparse input tensorflow - python

I am trying to use a 2D sparse input with TensorFlow 2.6; a minimal example is:
from tensorflow import keras

input1 = keras.layers.Input(shape=(3, 64), sparse=True)
layer1 = keras.layers.Dense(32)(input1)
output1 = keras.layers.Dense(32)(layer1)
model = keras.Model(inputs=[input1], outputs=[output1])
model.compile()
model.summary()
However I end up with the following error message:
TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("Placeholder_1:0", shape=(None, 3), dtype=int64), values=Tensor("Placeholder:0", shape=(None,), dtype=float32), dense_shape=Tensor("PlaceholderWithDefault:0", shape=(3,), dtype=int64)). Consider casting elements to a supported type.
What am I doing wrong? It works if I flatten the matrix.

Edited code:
import tensorflow as tf
input1 = tf.keras.layers.Input(shape=(3,), sparse=True)
layer1 = tf.keras.layers.Dense(32)(input1)
output1 = tf.keras.layers.Dense(32)(layer1)
model = tf.keras.Model(inputs=[input1], outputs=[output1])
model.compile()
model.summary()
Reference:
https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/sparse_tensor.ipynb#scrollTo=E8za5DK8vfo7
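For completeness, a minimal sketch of the flattening workaround mentioned above ("it works if I flatten the matrix"), assuming the (3, 64) matrix is fed as a single sparse vector of length 3 * 64 = 192:

from tensorflow import keras

# Sketch: feed the sparse matrix as one flattened sparse vector (3 * 64 = 192).
input1 = keras.layers.Input(shape=(3 * 64,), sparse=True)
layer1 = keras.layers.Dense(32)(input1)
output1 = keras.layers.Dense(32)(layer1)
model = keras.Model(inputs=[input1], outputs=[output1])
model.compile()
model.summary()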

Related

Tensorflow layer working outside of model but not inside

I have a custom TensorFlow layer that works fine on its own (it generates an output), but it throws an error when used with the Keras functional model API. Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input

# ------ Custom Layer -----------
class CustomLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomLayer, self).__init__()

    def split_heads(self, x):
        batch_size = x.shape[0]
        split_inputs = tf.reshape(x, (batch_size, -1, 3, 1))
        return split_inputs

    def call(self, q):
        qs = self.split_heads(q)
        return qs

# ------ Testing Layer with sample data --------
x = np.random.rand(1, 2, 3)
values_emb = CustomLayer()(x)
print(values_emb)
This generates the following output:
tf.Tensor(
[[[[0.7148978 ]
   [0.3997009 ]
   [0.11451813]]

  [[0.69927174]
   [0.71329576]
   [0.6588452 ]]]], shape=(1, 2, 3, 1), dtype=float32)
But when I use it in the Keras functional API it doesn't work. Here is the code:
x = Input(shape=(2,3))
values_emb = CustomLayer()(x)
model = Model(x, values_emb)
model.summary()
It gives this error:
TypeError: Failed to convert elements of (None, -1, 3, 1) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.
Does anyone know why this happens and how it can be fixed?
I think you should maybe try using tf.shape in your custom layer, since it will give you the dynamic shape of a tensor:
batch_size = tf.shape(x)[0]
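For example, a sketch of the layer with only that change (everything else kept as in the question):

import tensorflow as tf

class CustomLayer(tf.keras.layers.Layer):
    def split_heads(self, x):
        # tf.shape(x)[0] is a scalar tensor, so it stays valid even when the
        # static batch dimension is None inside the functional API.
        batch_size = tf.shape(x)[0]
        return tf.reshape(x, (batch_size, -1, 3, 1))

    def call(self, q):
        return self.split_heads(q)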

BERT embeddings in LSTM model error in fit function

I am a novice in TensorFlow.
I am trying to use BERT embeddings in an LSTM model.
This is my model function:
def bert_tweets_model():
    Bertmodel = TFAutoModel.from_pretrained(model_name, output_hidden_states=True)

    input_word_ids = tf.keras.Input(shape=(max_length,), dtype=tf.int32, name="input_ids")
    input_masks_in = tf.keras.Input(shape=(max_length,), name='masked_token', dtype='int32')

    with torch.no_grad():
        last_hidden_states = Bertmodel(input_word_ids, attention_mask=input_masks_in)[0]

    x = tf.keras.layers.LSTM(100, dropout=0.1, activation='relu', recurrent_dropout=0.3, return_sequences=True)(last_hidden_states)
    x = tf.keras.layers.LSTM(50, dropout=0.1, activation='relu', recurrent_dropout=0.3, return_sequences=True)(x)
    x = tf.keras.layers.Flatten()(x)
    output = tf.keras.layers.Dense(units=2, activation='sigmoid')(x)

    model = tf.keras.Model(inputs=[input_word_ids, input_masks_in], outputs=output)
    return model
with strategy.scope():
    model = bert_tweets_model()
    adam_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
    model.compile(loss='binary_crossentropy', optimizer=adam_optimizer, metrics=['accuracy'])
    model.summary()

validation_data = [dev_encoded, y_val]
train2 = [input_id, attention_mask]

history = model.fit(
    x=train2, y=y_train, batch_size=batch_size,
    epochs=3,
    validation_data=validation_data,
    verbose=2)
I received this error in the fit function when I tried to pass in the data:
"ValueError: Layer "model_1" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 512) dtype=int32>]"
I also received these warning messages, which I do not understand:
WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Can someone help me? Thanks in advance.
Reproducing your error:
_input1 = tf.random.uniform((1, 100), 0, 10)
_input2 = tf.random.uniform((1, 100), 0, 10)
model(_input1, _input2)
After running this code I am getting the same error...
Layer "model" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor: shape=(1, 100), ...
Now, the problem is that you have to enclose the inputs in a tuple (or list) and then pass them to the model like this:
model((_input1, _input2))
<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[0.5324366, 0.3743334]], dtype=float32)>
Remember: if you are using tf.data.Dataset, enclose the two inputs in a tuple when building the dataset, like this:
tf.data.Dataset.from_tensor_slices((words_id, words_mask))
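Applied to your fit call, it would look roughly like this (dev_input_id and dev_attention_mask are placeholder names for your validation id/mask pair):

history = model.fit(
    x=(input_id, attention_mask),   # both inputs grouped in one tuple
    y=y_train,
    validation_data=((dev_input_id, dev_attention_mask), y_val),
    batch_size=batch_size,
    epochs=3,
    verbose=2)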
Second problem, as you asked:
The warning appears because your LSTM layers use activation='relu' and a non-zero recurrent_dropout, which do not meet the criteria for the fast cuDNN kernel; TensorFlow therefore falls back to a slower generic kernel. It is only a performance warning, not an error.
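If GPU speed matters, here is a sketch of LSTM settings that do meet the cuDNN criteria (note this changes the layer's behaviour, so it is an option rather than a drop-in fix):

# The default activation ('tanh') and recurrent_dropout=0 satisfy the cuDNN
# kernel requirements; dropout on the inputs is still allowed.
x = tf.keras.layers.LSTM(100, dropout=0.1, return_sequences=True)(last_hidden_states)
x = tf.keras.layers.LSTM(50, dropout=0.1, return_sequences=True)(x)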

Error when trying to compile a keras model ValueError: Tensor conversion requested dtype int64 for Tensor with dtype float32

I am trying to make a speech recognizer using TensorFlow and am getting this error:
ValueError: Tensor conversion requested dtype int64 for Tensor with dtype float32: 'Tensor("loss_2/lambda_loss/ExpandDims:0", shape=(?, 1), dtype=float32)'
and
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int64 of argument 'x'.
Minimal running code, where tensorflow is imported as tf:
train_input_val = tf.keras.layers.Input(name='the_input', shape=[None, num_features], dtype='float32')
seq_len = tf.keras.layers.Input(name='input_length', shape=[1], dtype='int64')
y_pred = tf.keras.layers.Lambda(ctc_decode_func, name='lambda')([train_input_val, seq_len])

model2 = tf.keras.Model(inputs=[train_input_val, seq_len], outputs=y_pred)
model2.compile(loss={'lambda': lambda y_true, y_pred: y_true}, optimizer='adam')
and ctc_decode_func is defined as
def ctc_decode_func(args):
    y_pred, seq_len = args
    y_pred, log_prob = tf.keras.backend.ctc_decode(y_pred, tf.squeeze(seq_len))
    return y_pred
I tried casting everything to int64, but the error persisted. I don't even know which part of the Lambda layer is throwing the error.
Please help.
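One thing worth checking (an assumption, not verified against the original setup): ctc_decode returns int64 index tensors, while Keras' loss and sample-weight machinery works in float32, which would match both error messages. A sketch of the Lambda function with the decoded output cast to float32 (the same kind of cast the tf.unique answer below uses):

def ctc_decode_func(args):
    y_pred, seq_len = args
    decoded, log_prob = tf.keras.backend.ctc_decode(y_pred, tf.squeeze(seq_len))
    # ctc_decode returns a list of int64 tensors; take the first decoder output
    # and cast it so the float32 loss/sample-weight multiplication can proceed.
    return tf.cast(decoded[0], tf.float32)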

Wrap tensorflow function in keras layer

I'm trying to use the TensorFlow unique function (https://www.tensorflow.org/api_docs/python/tf/unique) in a Keras Lambda layer.
Code below:
def unique_idx(x):
    output = tf.unique(x)
    return output[1]
then
inp1 = Input(batch_shape=(None, 1))
idx = Lambda(unique_idx)(inp1)
model = Model(inputs=inp1, outputs=idx)
When I now call model.compile(optimizer='Adam', loss='mean_squared_error'),
I get the error:
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("lambda_9_sample_weights_1:0", shape=(?,), dtype=float32)'
Does anybody know what the error is here, or a different way of using the TensorFlow function?
A Keras model expects float32 as output, but the indices returned from tf.unique are int32; a cast fixes your problem.
Another issue is that tf.unique expects a flattened array; a reshape fixes that one.
import tensorflow as tf
from keras import Input
from keras.layers import Lambda
from keras.models import Model

def unique_idx(x):
    x = tf.reshape(x, [-1])
    u, indices = tf.unique(x)
    return tf.cast(indices, tf.float32)

x = Input(shape=(1,))
y = Lambda(unique_idx)(x)
model = Model(inputs=x, outputs=y)
model.compile(optimizer='adam', loss='mse')
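As a quick sanity check (a hypothetical example, not part of the original answer), duplicate input values should map to the same index:

import numpy as np

# Three samples, two distinct values: expect roughly [0., 1., 0.]
print(model.predict(np.array([[3.0], [5.0], [3.0]])))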

How to build a tensor from 2 scalars in Tensorflow?

I have two scalars resulting from the following operations:
a = tf.reduce_sum(tensor1) and b = tf.matmul(tf.transpose(tensor2), tensor3); the latter is a dot product, since tensor2 and tensor3 have the same dimensions (1-D vectors). Because these tensors have shape [None, dim1], it becomes difficult to deal with the shapes.
I want to build a tensor that has shape (2,1) using a and b.
I tried tf.Tensor([a, b], dtype=tf.float64, value_index=0), but it raises the error:
TypeError: op needs to be an Operation: [<tf.Tensor 'Sum_5:0' shape=() dtype=float32>, <tf.Tensor 'MatMul_67:0' shape=(?, ?) dtype=float32>]
Any easier way to build that tensor/vector?
This should probably do it; change the axis based on what you need:
a = tf.constant(1)
b = tf.constant(2)
c = tf.reshape(tf.stack([a, b], axis=0), (2, 1))
Output:
array([[1],
       [2]], dtype=int32)
You can use concat or stack to achieve this:
import tensorflow as tf

t1 = tf.constant([1])
t2 = tf.constant([2])
c = tf.reshape(tf.concat([t1, t2], 0), (2, 1))

with tf.Session() as sess:
    print(sess.run(c))
In a similar way you can achieve it with tf.stack.
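For example, a sketch of the tf.stack variant using the same t1 and t2 as above; stacking the two shape-(1,) tensors along axis 0 already yields shape (2, 1), so no reshape is needed:

c = tf.stack([t1, t2], axis=0)

with tf.Session() as sess:
    print(sess.run(c))  # [[1]
                        #  [2]]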
