I am building a sample neural network using PyCharm, TensorFlow 2.4, and Python 3.8.5. When running this command:
X = tf.placeholder("float", [None, num_input])  # num_input is the size of the input vector
I get an error like this one:
raise TypeError("Error converting %s to a TensorShape: %s." % (arg_name, e))
TypeError: Error converting shape to a TensorShape: Dimension value must be integer or None or have an __index__ method, got value '8.0' with type '<class 'numpy.float64'>'.
What is the cause of this error? Thanks in advance.
This call (tf.placeholder) only exists in TensorFlow 1.x, but you have TensorFlow 2 installed. Disable the TF2 behavior and run your code against the TF1 compatibility API:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores TF1 graph-mode semantics, including placeholders

X = tf.placeholder("float", [None, 8])
print(X)
<tf.Tensor 'Placeholder:0' shape=(?, 8) dtype=float32>
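Separately, note what the TypeError itself complains about: the dimension value is 8.0 with type numpy.float64, and a placeholder shape entry must be an integer or None. If num_input comes out of a NumPy computation, cast it first:

# num_input may be a numpy.float64 (the 8.0 in the traceback);
# shape entries must be ints or None, so cast explicitly:
X = tf.placeholder("float", [None, int(num_input)])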
I want to convert a Tensor into a two-dimensional NumPy array to produce a heat map.
(Tensor(1, 64, 64, 1) -> numpy (64, 64))
I tried x.numpy(), but it doesn't work. The error message is this:
AttributeError: in user code:

    :187 call  *
        x_np = x.numpy()
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:401 __getattr__
        self.__getattribute__(name)

    AttributeError: 'Tensor' object has no attribute 'numpy'
This is the tensor information and type of the tensor:
Tensor("ExpandDims:0", shape=(1, 64, 64, 1), dtype=float32)
<class 'tensorflow.python.framework.ops.Tensor'>
and I don't know what ExpandDims:0 means (what is this identifier?).
TensorFlow: 2.6.0
NumPy: 1.21.2
I made test code that uses a tensor of the same type and shape in the same environment, and it works.
What should I do?
import tensorflow as tf
import numpy

print(tf.__version__)     # 2.6.0
print(numpy.__version__)  # 1.21.2

a = tf.constant([[[[0.1]*1]*64]*64])  # shape (1, 64, 64, 1), same as above
print(a)
print(a.numpy())  # works here, in eager mode
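The "in user code" line in the traceback indicates that the failing .numpy() call runs inside a tf.function (a Keras layer's call is compiled this way), where tensors are symbolic and carry no concrete values; that is why the standalone test works but the real code does not. A minimal sketch of the difference and of a tf.py_function workaround (to_heatmap is a hypothetical helper name):

import tensorflow as tf

a = tf.constant([[[[0.1]*1]*64]*64])   # shape (1, 64, 64, 1)
print(a.numpy().squeeze().shape)       # eager mode: works, prints (64, 64)

@tf.function
def fails(x):
    return x.numpy()                   # AttributeError: 'Tensor' object has no attribute 'numpy'

def to_heatmap(x):                     # hypothetical helper
    return x.numpy().squeeze()         # concrete values are available here

@tf.function
def works(x):
    # tf.py_function executes to_heatmap eagerly, so .numpy() is allowed
    return tf.py_function(to_heatmap, [x], tf.float32)

print(works(a).shape)                  # (64, 64)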
I am trying to use a 2D sparse input with TensorFlow 2.6; a minimal example is:
from tensorflow import keras

input1 = keras.layers.Input(shape=(3, 64), sparse=True)
layer1 = keras.layers.Dense(32)(input1)
output1 = keras.layers.Dense(32)(layer1)
model = keras.Model(inputs=[input1], outputs=[output1])
model.compile()
model.summary()
However I end up with the following error message:
TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("Placeholder_1:0", shape=(None, 3), dtype=int64), values=Tensor("Placeholder:0", shape=(None,), dtype=float32), dense_shape=Tensor("PlaceholderWithDefault:0", shape=(3,), dtype=int64)). Consider casting elements to a supported type.
What am I doing wrong? It works if I flatten the matrix.
Edited code:
import tensorflow as tf

input1 = tf.keras.layers.Input(shape=(3,), sparse=True)
layer1 = tf.keras.layers.Dense(32)(input1)
output1 = tf.keras.layers.Dense(32)(layer1)
model = tf.keras.Model(inputs=[input1], outputs=[output1])
model.compile()
model.summary()
Reference:
https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/sparse_tensor.ipynb#scrollTo=E8za5DK8vfo7
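A possible workaround, sketched here under the assumption that Dense simply cannot consume a rank-3 SparseTensor directly (this variant is not from the linked notebook and is untested): densify the input with a Lambda layer before the first Dense:

import tensorflow as tf

input1 = tf.keras.layers.Input(shape=(3, 64), sparse=True)
# Convert the SparseTensor to a regular dense Tensor before Dense sees it
dense1 = tf.keras.layers.Lambda(tf.sparse.to_dense)(input1)
layer1 = tf.keras.layers.Dense(32)(dense1)
output1 = tf.keras.layers.Dense(32)(layer1)
model = tf.keras.Model(inputs=[input1], outputs=[output1])
model.compile()
model.summary()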
I am trying to make a speech recognizer using TensorFlow and am getting this error:
ValueError: Tensor conversion requested dtype int64 for Tensor with dtype float32: 'Tensor("loss_2/lambda_loss/ExpandDims:0", shape=(?, 1), dtype=float32)'
and
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int64 of argument 'x'.
Minimal running code, where tensorflow is imported as tf:
train_input_val = tf.keras.layers.Input(name='the_input', shape=[None, num_features], dtype='float32')
seq_len = tf.keras.layers.Input(name='input_length', shape=[1], dtype='int64')
y_pred = tf.keras.layers.Lambda(ctc_decode_func, name='lambda')([train_input_val, seq_len])
model2 = tf.keras.Model(inputs=[train_input_val, seq_len], outputs=y_pred)
model2.compile(loss={'lambda': lambda y_true, y_pred: y_true}, optimizer='adam')
and ctc_decode_func is defined as:
def ctc_decode_func(args):
    y_pred, seq_len = args
    y_pred, log_prob = tf.keras.backend.ctc_decode(y_pred, tf.squeeze(seq_len))
    return y_pred
I tried casting everything to int64, but the error persisted. I don't even know which part of the Lambda layer is throwing the error. Please help.
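One hedged guess, in line with the casting fix in the answer to the next question: Keras multiplies the per-sample loss by float32 sample weights (the 'Mul' op in the TypeError), while ctc_decode returns int64 label indices, so casting the Lambda output to float32 may resolve the mismatch:

def ctc_decode_func(args):
    y_pred, seq_len = args
    decoded, log_prob = tf.keras.backend.ctc_decode(y_pred, tf.squeeze(seq_len))
    # ctc_decode returns a list of decoded tensors; take the first (greedy)
    # path and cast it so the float32 sample weights can multiply it.
    return tf.cast(decoded[0], tf.float32)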
I'm trying to use the TensorFlow unique function (https://www.tensorflow.org/api_docs/python/tf/unique) in a Keras Lambda layer.
Code below:
def unique_idx(x):
    output = tf.unique(x)
    return output[1]
then
inp1 = Input(batch_shape=(None, 1))
idx = Lambda(unique_idx)(inp1)
model = Model(inputs=inp1, outputs=idx)
When I now call model.compile(optimizer='Adam', loss='mean_squared_error'),
I get the error:
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("lambda_9_sample_weights_1:0", shape=(?,), dtype=float32)'
Does anybody know what the error is here, or a different way of using this TensorFlow function?
A Keras model expects float32 as output, but the indices returned by tf.unique are int32; a cast fixes your problem. Another issue is that tf.unique expects a flattened (1-D) array; a reshape fixes that one.
import tensorflow as tf
from keras import Input
from keras.layers import Lambda
from keras.engine import Model
def unique_idx(x):
    x = tf.reshape(x, [-1])              # tf.unique needs a 1-D tensor
    u, indices = tf.unique(x)
    return tf.cast(indices, tf.float32)  # Keras wants a float32 output
x = Input(shape=(1,))
y = Lambda(unique_idx)(x)
model = Model(inputs=x, outputs=y)
model.compile(optimizer='adam', loss='mse')
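A quick way to sanity-check the fixed layer (the sample values below are made up):

import numpy as np

data = np.array([[1.0], [2.0], [1.0], [3.0]], dtype=np.float32)
# Each value maps to the index of its unique entry, e.g. [0. 1. 0. 2.]
print(model.predict(data))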
I'm trying to set up a sequential RNN in TensorFlow with seq2seq.rnn_decoder(). The input that rnn_decoder() wants is a list of tensors, so to generate this I've passed in a rank-3 tensor and used tf.unpack() to make it into a list. The problem arises when the float32 array that I pass in is turned into a float64 tensor by tf.unpack(), making it incompatible with the rest of the model. Here's the code I put together to convince myself that the culprit is tf.unpack():
inputDat = loader.getSequential(BATCH_SIZE)
print(inputDat.shape)
output (BATCH_SIZE is five, sequence length is ten):
(10, 5, 3)
Then I can load this data in a Tensorflow session:
sess = tf.InteractiveSession()
input_tensor = tf.constant(inputDat.astype('float32'), dtype=tf.float32)
print "Input tensor type: " + str(type(input_tensor.eval()[0,0,0]))
input_tensor = tf.unpack(inputDat)
print "Input tensor shape: " + str(len(input_tensor)) + "x" + str(input_tensor[0].eval().shape)
print "Input tensor type: " + str(type(input_tensor[0].eval()[0,0]))
Output:
Input tensor type: <type 'numpy.float32'>
Input tensor shape: 10x(5, 3)
Input tensor type: <type 'numpy.float64'>
What's going on here? Using a for loop to iterate through each of the sequential entries and re-cast it seems like the wrong way to do this, and I can't find a method in TensorFlow to cast every member of a list.
You don't need a for loop: you can cast the whole tensor once with tf.cast() before unpacking it.
Example:
input_tensor = tf.cast(inputDat, tf.float32)  # cast the whole 64-bit array once
input_tensor = tf.unpack(input_tensor)        # now a list of float32 tensors
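(Casting before unpacking also keeps the result a list; passing the list itself to tf.cast() would re-pack it into a single tensor. Note that in TensorFlow 1.0 and later, tf.unpack was renamed to tf.unstack, so on newer versions the equivalent call is tf.unstack(input_tensor).)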