Tensorflow: Shape mismatch issue with one-dimensional data - python

I am trying to pass x_data through feed_dict, but I am getting the error below and I am not sure what is wrong in the code.
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'x_12' with dtype int32 and shape [1000]
[[Node: x_12 = Placeholder[dtype=DT_INT32, shape=[1000], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
My Code:
import tensorflow as tf
import numpy as np
model = tf.global_variables_initializer()
#define x and y
x = tf.placeholder(shape=[1000],dtype=tf.int32,name="x")
y = tf.Variable(5*x**2-3*x+15,name = "y")
x_data = tf.pack(np.random.randint(0,100,size=1000))
print(x_data)
print(x)
with tf.Session() as sess:
    sess.run(model)
    print(sess.run(y, feed_dict={x: x_data}))
I checked the shapes of x and x_data and they are the same:
Tensor("pack_8:0", shape=(1000,), dtype=int32)
Tensor("x_14:0", shape=(1000,), dtype=int32)
I am working with one dimensional data.
Any help is appreciated, Thanks!

To make it work I changed two things: first, I made y a plain Tensor instead of a tf.Variable; second, I did not convert x_data to a Tensor, because, as the Session.run documentation notes:
The optional feed_dict argument allows the caller to override the value of tensors in the graph. Each key in feed_dict can be one of the following types:
If the key is a Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a placeholder, the shape of the value will be checked for compatibility with the placeholder.
The changed code which works for me:
import tensorflow as tf
import numpy as np
model = tf.global_variables_initializer()
#define x and y
x = tf.placeholder(shape=[1000],dtype=tf.int32,name="x")
y = 5*x**2-3*x+15 # without tf.Variable, making it a tf.Tensor
x_data = np.random.randint(0,100,size=1000) # without tf.pack
print(x_data)
print(x)
with tf.Session() as sess:
    sess.run(model)
    print(sess.run(y, feed_dict={x: x_data}))
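If you do want to build x_data as a graph tensor first (tf.pack was later renamed tf.stack), a minimal sketch of a workaround, using the same graph as above with a hypothetical x_data_t, is to evaluate the tensor into a numpy array before feeding it:

import tensorflow as tf
import numpy as np

x = tf.placeholder(shape=[1000], dtype=tf.int32, name="x")
y = 5*x**2 - 3*x + 15
x_data_t = tf.constant(np.random.randint(0, 100, size=1000).astype(np.int32))  # a Tensor; not feedable directly
with tf.Session() as sess:
    x_data = sess.run(x_data_t)  # materialize the tensor as a numpy array
    print(sess.run(y, feed_dict={x: x_data}))  # feeding the ndarray now works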

Related

How to convert tensorflow variable to numpy array

I am trying to create a model graph where the input is a TensorFlow variable that I feed from my Java program. In my code I use numpy methods, so I need to convert my TensorFlow input to a numpy array.
Here is my code snippet:
import tensorflow as tf
import numpy as np
eps = np.finfo(float).eps
EXPORT_DIR = './model'
def standardize(x):
    med0 = np.median(x)
    mad0 = np.median(np.abs(x - med0))
    x1 = (x - med0) / (mad0 + eps)
    return x1

# tensorflow input variable
a = tf.placeholder(tf.float32, name="input")

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # Converting the input variable to numpy array
    tensor = a.eval()
    # calling standardize method
    numpyArray = standardize(tensor)
    # converting numpy array to tf
    tf.convert_to_tensor(numpyArray)
    # creating graph
    graph = tf.get_default_graph()
    tf.train.write_graph(graph, EXPORT_DIR, 'model_graph.pb', as_text=False)
I am getting the error InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input' with dtype float at the line tensor = a.eval().
When I give a constant value in place of the placeholder, it works and generates the graph. But I want the input to come from my Java code.
Is there any way to do that, or do I need to convert all my numpy methods to TensorFlow methods?
A placeholder is just an empty variable in TensorFlow, to which you can feed numpy values. What you are trying to do does not make sense: you cannot get a value out of an empty variable.
If you want to standardize your tensor, why convert it to numpy first? You can do this directly with TensorFlow.
The following is taken from this Stack Overflow answer:
def get_median(v):
    v = tf.reshape(v, [-1])
    m = v.get_shape()[0] // 2
    return tf.nn.top_k(v, m).values[m - 1]
Now, you can implement your function as
def standardize(x):
    med0 = get_median(x)
    mad0 = get_median(tf.abs(x - med0))
    x1 = (x - med0) / (mad0 + eps)
    return x1
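Putting it together, a minimal end-to-end sketch under two assumptions of mine: a fixed input shape of [4], since get_median sizes top_k from the static shape, and a float32 eps so the dtypes match:

import tensorflow as tf
import numpy as np

eps = np.finfo(np.float32).eps  # float32 so it matches the tensor dtype

def get_median(v):
    v = tf.reshape(v, [-1])
    m = v.get_shape()[0] // 2
    return tf.nn.top_k(v, m).values[m - 1]

def standardize(x):
    med0 = get_median(x)
    mad0 = get_median(tf.abs(x - med0))
    return (x - med0) / (mad0 + eps)

a = tf.placeholder(tf.float32, shape=[4], name="input")  # same name, so an external caller can still feed it
out = standardize(a)
with tf.Session() as sess:
    print(sess.run(out, feed_dict={a: [1.0, 2.0, 3.0, 10.0]}))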

Tensorflow: Can not convert a function into a Tensor or Operation

I have read through previous threads. My data are in the form of an array fed to a placeholder. Trying to convert the data to a tensor before feeding produces a different (inverse) error message. Other solutions similarly do not seem to work in this situation. Here is minimal code.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.contrib.factorization import KMeans
X = tf.placeholder(tf.float32, shape=[None, 10], name="X")
data = np.random.randn(2,10)
def lump(X):
    # Build KMeans graph
    kmeans = KMeans(inputs=X, num_clusters=k, distance_metric='cosine',
                    use_mini_batch=True)
    (all_scores, cluster_idx, scores, cluster_centers_initialized,
     cluster_centers_var, init_op, train_op) = kmeans.training_graph()
    cluster_idx = cluster_idx[0]  # fix for cluster_idx being a tuple
    avg_distance = tf.reduce_mean(scores)
    return cluster_idx, scores

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    idx, d = sess.run(lump, feed_dict={X: data})
Correct, you can't evaluate just lump, because it's a function (returning tensors), not a tensor or an op. You probably meant to do something like this:
cluster_idx, scores = lump(X)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    idx, d = sess.run([cluster_idx, scores], feed_dict={X: data})
Note that lump() is invoked before tf.global_variables_initializer(), because it defines new variables in the graph, so they must be initialized.
The code still fails, because lump is clearly not finished and has issues with dimensions, but it is the right way to evaluate something in a session.
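As an aside, the same build-then-run pattern with a trivial stand-in, a minimal sketch just to show the shape of the fix; lump_demo and its ops are hypothetical, not the KMeans code:

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 10], name="X")

def lump_demo(X):
    # graph construction only; nothing runs here
    row_means = tf.reduce_mean(X, axis=1)
    total = tf.reduce_sum(X)
    return row_means, total

row_means, total = lump_demo(X)  # build the ops first
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    m, t = sess.run([row_means, total], feed_dict={X: np.random.randn(2, 10)})
    print(m.shape, t)  # (2,) and a scalar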

Feedholder shape mismatch in tensorflow

I get a mismatch of shapes between the input and the placeholder even though I am pretty sure that the shapes in both cases are the same. Here's the code:
ex3data1.mat contains a 5000*400 matrix X.
import tensorflow as tf
import numpy as np
import scipy.io as sio

theta1 = sio.loadmat('ex3weights.mat')['Theta1']
theta2 = sio.loadmat('ex3weights.mat')['Theta2']

x = tf.placeholder(tf.float64, shape=[1, 400])
x2 = tf.concat([[[1]], x], 1)
z1 = tf.matmul(x2, np.transpose(theta1))
h1 = tf.divide(1.0, (1.0 + tf.exp(-z1)))
h1 = tf.concat([[[1]], h1], 1)
z2 = tf.matmul(h1, np.transpose(theta2))
max = tf.argmax(z2)
max = max + 1

sess = tf.Session()
op = sio.loadmat('ex3data1.mat')['X'][1234]
op = np.reshape(op, [1, 400])
op = op.astype(np.float64)
m = {x: op}
sess.run(max, feed_dict=m)
I get the following error:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_2' with dtype double and shape [1,400]
[[Node: Placeholder_2 = Placeholder[dtype=DT_DOUBLE, shape=[1,400], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Why don't you
print(op.shape)
before assigning the dict to m? I suppose it is [5000, 400]. Reshape works when the number of elements does not change. But when you expect an 'input_width'-wide stream, you can define the placeholder this way:
x = tf.placeholder(tf.float64, [None, input_width], "network_input")
allowing the number of cases to be flexible. Then you can feed it with any number of cases, like 5000, and the math will still work.
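A minimal sketch of that flexible placeholder, assuming input_width = 400 as in the question and a hypothetical row_argmax op standing in for the real math:

import numpy as np
import tensorflow as tf

input_width = 400
x = tf.placeholder(tf.float64, [None, input_width], "network_input")
row_argmax = tf.argmax(x, axis=1)  # stand-in for the network's output

with tf.Session() as sess:
    one = np.random.rand(1, input_width)      # a single reshaped example ...
    many = np.random.rand(5000, input_width)  # ... or the full matrix
    print(sess.run(row_argmax, feed_dict={x: one}).shape)   # (1,)
    print(sess.run(row_argmax, feed_dict={x: many}).shape)  # (5000,)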

Wrap tensorflow function in keras layer

I'm trying to use the TensorFlow unique function (https://www.tensorflow.org/api_docs/python/tf/unique) in a Keras lambda layer.
Code below:
def unique_idx(x):
    output = tf.unique(x)
    return output[1]
then
inp1 = Input(batch_shape=(None, 1))
idx = Lambda(unique_idx)(inp1)
model = Model(inputs=inp1, outputs=idx)
When I now use model.compile(optimizer='Adam', loss='mean_squared_error')
I get the error:
ValueError: Tensor conversion requested dtype int32 for Tensor with
dtype float32: 'Tensor("lambda_9_sample_weights_1:0", shape=(?,),
dtype=float32)'
Does anybody know what the error is here, or a different way of using the TensorFlow function?
A Keras model expects float32 as output, but the indices returned from tf.unique are int32; a cast fixes your problem.
Another issue is that unique expects a flattened array; a reshape fixes that one.
import tensorflow as tf
from keras import Input
from keras.layers import Lambda
from keras.engine import Model
def unique_idx(x):
    x = tf.reshape(x, [-1])
    u, indices = tf.unique(x)
    return tf.cast(indices, tf.float32)
x = Input(shape=(1,))
y = Lambda(unique_idx)(x)
model = Model(inputs=x, outputs=y)
model.compile(optimizer='adam', loss='mse')
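Continuing from the model above, a quick sanity check; the expected output is my reading of tf.unique, which numbers elements by first occurrence:

import numpy as np

data = np.array([[3.0], [1.0], [3.0], [2.0]])
print(model.predict(data).ravel())  # should print something like [0. 1. 0. 2.] -- duplicate rows share an index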

Variables with dynamic shape TensorFlow

I need to create a matrix in TensorFlow to store some values. The trick is the matrix has to support dynamic shape.
I am trying to do the same thing I would do in numpy:
myVar = tf.Variable(tf.zeros((x, y)), validate_shape=False)
where x = (?) and y = 2. But this does not work, because zeros does not support a 'partially known TensorShape'. So how should I do this in TensorFlow?
1) You could use tf.fill(dims, value=0.0), which works with dynamic shapes (a minimal sketch follows below).
2) You could use a placeholder for the variable dimension, like e.g.:
m = tf.placeholder(tf.int32, shape=[])
x = tf.zeros(shape=[m])
with tf.Session() as sess:
    print(sess.run(x, feed_dict={m: 5}))
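And a minimal sketch of option 1), letting tf.fill take a shape that is only known at run time (y = 2 as in the question):

import tensorflow as tf

n = tf.placeholder(tf.int32, shape=[])  # the dynamic (?) dimension
zeros = tf.fill([n, 2], 0.0)            # shape is resolved when n is fed
with tf.Session() as sess:
    print(sess.run(zeros, feed_dict={n: 3}).shape)  # (3, 2)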
Alternatively, if you know the shape outside of the session, this could help:
import tensorflow as tf
import numpy as np
v = tf.Variable([], validate_shape=False)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(v, feed_dict={v: np.zeros((3, 4))}))
    print(sess.run(v, feed_dict={v: np.zeros((2, 2))}))
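If the variable itself needs to change shape between runs, tf.assign with validate_shape=False is another possibility, sketched here under my own reading of the TF1 API:

import tensorflow as tf

v = tf.Variable([], dtype=tf.float32, validate_shape=False)
grow = tf.assign(v, tf.zeros([3, 4]), validate_shape=False)  # assigning a new shape is allowed
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grow).shape)  # (3, 4)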
