Tensorflow cost function placeholder error - python

I have the following code trying to optimize a linear model with two inputs and three parameters (m_1, m_2 and b). Initially, I had issues importing the data in a form that the feed_dict would accept, which I solved by putting it in a numpy array instead.
Now the optimizer function will run smoothly (and the outputs look roughly like it is optimizing the parameters), but as soon as I try to return the cost with the line at the end:
cost_val = sess.run(cost)
It returns the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_2' with dtype float and shape [?,1]
[[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=[?,1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
If I comment out that line alone, everything runs smoothly.
I tried changing the cost function from the more complicated one I was using to something simpler, but the error persists. I know this is probably related to the shape of the input data, but I can't figure out why the data works for the optimizer but not for the cost function.
import tensorflow as tf
import numpy as np

# reading in data
filename = tf.train.string_input_producer(["file.csv"])
reader = tf.TextLineReader(skip_header_lines=1)
key, value = reader.read(filename)
rec_def = [[1], [1], [1]]
input_1, input_2, col3 = tf.decode_csv(value, record_defaults=rec_def)
# parameters
learning_rate = 0.001
training_steps = 300
x = tf.placeholder(tf.float32, [None,1])
x2 = tf.placeholder(tf.float32, [None,1])
m = tf.Variable(tf.zeros([1,1]))
m2 = tf.Variable(tf.zeros([1,1]))
b = tf.Variable(tf.zeros([1]))
y_ = tf.placeholder(tf.float32, [None,1])
y = tf.matmul(x,m) + tf.matmul(x2,m2) + b
# cost function
# cost = tf.reduce_mean(tf.log(1+tf.exp(-y_*y)))
cost = tf.reduce_sum(tf.pow((y_-y),2))
# Gradient descent optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# initializing variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    sess.run(init)
    for i in range(training_steps):
        xs = np.array([[sess.run(input_1)]])
        ys = np.array([[sess.run(input_2)]])
        label = np.array([[sess.run(col3)]])
        feed = {x: xs, x2: ys, y_: label}
        sess.run(optimizer, feed_dict=feed)
        cost_val = sess.run(cost)
    coord.request_stop()
    coord.join(threads)

The cost tensor is a function of the placeholder tensors, and evaluating it requires them to have values. Since the call to sess.run(cost) isn't feeding those placeholders, you're seeing the error. (Putting it another way: what values of x and y_ do you want to compute the cost for?)
So you want to change the line:
cost_val = sess.run(cost)
to:
cost_val = sess.run(cost, feed_dict=feed)
Hope that helps.
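As a side note, each separate sess.run(input_1), sess.run(input_2) and sess.run(col3) call re-runs the reader, so the three values come from different CSV rows. Fetching the three columns, the train op and the cost in a single call avoids this and reuses one feed. A minimal sketch of the training loop under that change (it assumes the same graph as in the question):

    for i in range(training_steps):
        # one sess.run call dequeues exactly one CSV row for all three columns
        x_val, x2_val, label_val = sess.run([input_1, input_2, col3])
        feed = {x: [[x_val]], x2: [[x2_val]], y_: [[label_val]]}
        # run the update step and evaluate the cost on the same feed
        _, cost_val = sess.run([optimizer, cost], feed_dict=feed)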

Related

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape

I have written the following code in PyCharm, which implements a fully connected layer (FCL) in TensorFlow. The placeholder raises an invalid argument error, so I set the dtype, shape, and name in the placeholder, but I still get the invalid argument error.
I want to produce a new Signal(1, 222) through the FCL model.
input Signal(1, 222) => output Signal(1, 222)
maxPredict: find the index with the highest value in the output signal.
calculateY: get the frequency array value corresponding to maxPredict.
loss: use the difference between true Y and calculateY as the loss.
loss = tf.abs(trueY - calculateY)
Code (where the error occurs)
x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')
ERROR
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
[[{{node inputX}} = Placeholder[dtype=DT_FLOAT, shape=[1,222], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
New Error Case
I changed my code.
x = tf.placeholder(tf.float32, [None, 222], name='inputX')
Error Case 1
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - tf.Variable(newY))
ValueError: initial_value must have a shape specified: Tensor("mul:0", shape=(?,), dtype=float32)
Error Case 2
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - newY)
Traceback (most recent call last):
File "D:/PycharmProject/DetectionSignal/TEST_FCL_StackOverflow.py", line 127, in
trainStep = opt.minimize(loss)
File "C:\Users\Heewony\Anaconda3\envs\TSFW_pycharm\lib\site-packages\tensorflow\python\training\optimizer.py", line 407, in minimize
([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [<tf.Variable 'Variable:0' shape=(222, 1024) dtype=float32_ref>, <tf.Variable 'Variable_1:0' shape=(1024,) dtype=float32_ref>, ......... <tf.Variable 'Variable_5:0' shape=(222,) dtype=float32_ref>] and loss Tensor("Abs:0", dtype=float32).
Development environment
OS Platform and Distribution: Windows 10 x64
TensorFlow installed from: Anaconda
TensorFlow version: 1.12.0
Python version: 3.6.7
Mobile device: N/A
Exact command to reproduce: N/A
GPU model and memory: NVIDIA GeForce GTX 1080 Ti
CUDA/cuDNN: 9.0/7.4
Model and Function
import datetime
import numpy as np
import scipy.io as sio
import tensorflow as tf

def Model_FCL(inputX):
    data = inputX  # input Signals
    # Fully Connected Layer 1
    flatConvh1 = tf.reshape(data, [-1, 222])
    fcW1 = tf.Variable(tf.truncated_normal(shape=[222, 1024], stddev=0.05))
    fcb1 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch1 = tf.nn.relu(tf.matmul(flatConvh1, fcW1) + fcb1)
    # Fully Connected Layer 2
    flatConvh2 = tf.reshape(fch1, [-1, 1024])
    fcW2 = tf.Variable(tf.truncated_normal(shape=[1024, 1024], stddev=0.05))
    fcb2 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch2 = tf.nn.relu(tf.matmul(flatConvh2, fcW2) + fcb2)
    # Output Layer
    fcW3 = tf.Variable(tf.truncated_normal(shape=[1024, 222], stddev=0.05))
    fcb3 = tf.Variable(tf.constant(0.1, shape=[222]))
    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)
    predictY = tf.nn.softmax(logits)
    return predictY, logits

def loadMatlabData(fileName):
    contentsMat = sio.loadmat(fileName)
    dataInput = contentsMat['dataInput']
    dataLabel = contentsMat['dataLabel']
    dataSize = dataInput.shape
    dataSize = dataSize[0]
    return dataInput, dataLabel, dataSize

def getNextSignal(num, data, labels, WINDOW_SIZE, OUTPUT_SIZE):
    shuffleSignal = data[num]
    shuffleLabels = labels[num]
    # shuffleSignal = shuffleSignal.reshape(1, WINDOW_SIZE)
    # shuffleSignal = np.asarray(shuffleSignal, np.float32)
    return shuffleSignal, shuffleLabels

def getBasicFrequency():
    # basicFreq => shape(222)
    basicFreq = np.array([0.598436736688, 0.610649731314, ... 3.297508549096])
    return basicFreq
Graph
basicFreq = getBasicFrequency()
myGraph = tf.Graph()
with myGraph.as_default():
    # placeholders for the input data and the true output value
    x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')  # Signal size = [1, 222]
    y = tf.placeholder(tf.float32, name='trueY')  # Float value size = [1]
    print('inputzz ', x, y)
    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())

    predictY, logits = Model_FCL(x)  # Predict Signal, size = [1, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # Find max index of Predict Signal
    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60  # Find the value that corresponds to the Freq array index
    loss = tf.abs(y - tf.Variable(newY))  # Calculate absolute (true Y - predict Y)
    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)
    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())
Session
with tf.Session(graph=myGraph) as sess:
    sess.run(tf.global_variables_initializer())
    dataFolder = './'
    writer = tf.summary.FileWriter('./logMyGraph', sess.graph)
    startTime = datetime.datetime.now()
    numberSummary = 0
    accuracyTotalTrain = []
    for trainEpoch in range(1, 25 + 1):
        arrayTrain = []
        dataPPG, dataLabel, dataSize = loadMatlabData(dataFolder + "TestValues.mat")
        for i in range(dataSize):
            batchSignal, valueTrue = getNextSignal(i, dataPPG, dataLabel, 222, 222)
            _, lossPrint, valuePredict = sess.run([trainStep, loss, newY],
                                                  feed_dict={x: batchSignal, y: valueTrue})
            print('Train ', i, ' ', valueTrue, ' - ', valuePredict, ' Loss ', lossPrint)
            arrayTrain.append(lossPrint)
            writer.add_summary(tf.Summary(value=[tf.Summary.Value(tag='Loss', simple_value=float(lossPrint))]),
                               numberSummary)
            numberSummary += 1
        accuracyTotalTrain.append(np.mean(arrayTrain))
    print('Final Train : ', accuracyTotalTrain)
sess.close()
It seems that the variable batchSignal has the wrong type or shape. It must be a numpy array of shape exactly [1, 222]. If you want to use a batch of n × 222 examples, the placeholder x should have shape [None, 222] and the placeholder y shape [None].
By the way, consider using tf.layers.dense instead of explicitly initializing variables and implementing the layers yourself.
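For reference, here is a minimal sketch of the same model written with tf.layers.dense (the layer sizes follow the question; tf.layers.dense creates and initializes the weight and bias variables for you):

    def Model_FCL(inputX):
        # two hidden ReLU layers of 1024 units, then a linear layer back to 222 outputs
        fch1 = tf.layers.dense(inputX, 1024, activation=tf.nn.relu, name='fc1')
        fch2 = tf.layers.dense(fch1, 1024, activation=tf.nn.relu, name='fc2')
        logits = tf.layers.dense(fch2, 222, activation=None, name='fc3')
        predictY = tf.nn.softmax(logits)
        return predictY, logits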
There are three things to change.
Error Case 0. You don't need to reshape your flow between layers; you can use None in the first dimension to pass a dynamic batch size.
Error Case 1. You can use newY directly as the output of the network. tf.Variable is only for defining weights or biases.
Error Case 2. It seems TensorFlow cannot backpropagate through this loss: tf.argmax() is not differentiable, and tf.gather() does not propagate gradients through its integer indices. For a regression problem, mean squared error is usually sufficient.
Here is how I would rewrite your code. I don't have your MATLAB data, so I can't debug the Python/MATLAB interface:
Model:
def Model_FCL(inputX):
    # Fully Connected Layer 1
    fcW1 = tf.get_variable('w1', shape=[222, 1024], initializer=tf.initializers.truncated_normal())
    fcb1 = tf.get_variable('b1', shape=[1024], initializer=tf.initializers.truncated_normal())
    # fcb1 = tf.get_variable('b1', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    fch1 = tf.nn.relu(tf.matmul(inputX, fcW1) + fcb1, name='relu1')
    # Fully Connected Layer 2
    fcW2 = tf.get_variable('w2', shape=[1024, 1024], initializer=tf.initializers.truncated_normal())
    fcb2 = tf.get_variable('b2', shape=[1024], initializer=tf.initializers.truncated_normal())
    # fcb2 = tf.get_variable('b2', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    fch2 = tf.nn.relu(tf.matmul(fch1, fcW2) + fcb2, name='relu2')
    # Output Layer
    fcW3 = tf.get_variable('w3', shape=[1024, 222], initializer=tf.initializers.truncated_normal())
    fcb3 = tf.get_variable('b3', shape=[222], initializer=tf.initializers.truncated_normal())
    # fcb3 = tf.get_variable('b3', shape=[222], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)
    predictY = tf.nn.softmax(logits)  # I'm not sure it will learn well if you apply softmax and then abs/MSE
    return predictY, logits
Graph:
with myGraph.as_default():
    # placeholders for the input data and the true output value
    # put None (a dynamic batch size), not -1, in the first dimension so that you can change your batch size
    x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')  # Signal size = [None, 222]
    y = tf.placeholder(tf.float32, shape=[None], name='trueY')  # Float value size = [None]
    ...
    predictY, logits = Model_FCL(x)  # Predict Signal, size = [None, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # Find max index of Predict Signal
    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60  # Find the value that corresponds to the Freq array index
    loss = tf.losses.mean_squared_error(labels=y, predictions=newY)  # use MSE for a regression problem
    # loss = tf.abs(y - newY)  # absolute error (true Y - predicted Y); no gradient flows back through this path
    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)
If you are still getting the same error even after feeding the right numpy shape and maintaining the correct dtype (np.int32 or np.float32) as suggested by the error message, then the following should help:
# print the list of placeholders and other nodes declared in the current graph,
# which is where the placeholder causing your error lives
[n.name for n in tf.get_default_graph().as_graph_def().node]
# reset the default graph, discarding the declared placeholders, so you can start over
tf.reset_default_graph()
The problem can also be worked around by restarting the kernel for every debugging run, but that is not practical.
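For example, in a notebook cell you would typically reset the graph before rebuilding it, so that stale placeholders from earlier runs of the cell can no longer be required by your ops (a minimal sketch, assuming the placeholder shapes from the question):

    import tensorflow as tf

    tf.reset_default_graph()  # discard placeholders and ops left over from previous runs of this cell
    x = tf.placeholder(tf.float32, [None, 222], name='inputX')
    y = tf.placeholder(tf.float32, [None], name='trueY')
    # ... rebuild the rest of the model here ...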

TensorFlow input variable error

I am writing TensorFlow code and get an error when I try to run it with variables.
The base code is
import tensorflow as tf
import numpy as np

graph = tf.Graph()
with graph.as_default():
    with tf.name_scope("variables"):
        # keep track of how many times the model has been run
        global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name="global_step")
        # keep track of sum of all outputs over time
        total_output = tf.Variable(0, dtype=tf.float32, trainable=False, name="total_output")
    with tf.name_scope("transformation"):
        # separate input layer
        with tf.name_scope("input"):
            # create input placeholder which takes in a vector
            a = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_A")
        # separate the middle layer
        with tf.name_scope("middle"):
            b = tf.reduce_prod(a, name="product_b")
            c = tf.reduce_sum(a, name="sum_c")
        # separate the output layer
        with tf.name_scope("output"):
            output = tf.add(b, c, name="output")
    # separate the update layer and store the variables
    with tf.name_scope("update"):
        update_total = total_output.assign(output)
        increment_step = global_step.assign_add(1)
    # now create namescope summaries and store these in the summary
    with tf.name_scope("summaries"):
        avg = tf.divide(update_total, tf.cast(increment_step, tf.float32), name="average")
        # create summary for output node
        tf.summary.scalar("output_summary", output)
        tf.summary.scalar("total_summary", update_total)
        tf.summary.scalar("average_summary", avg)
    with tf.name_scope("global_ops"):
        init = tf.initialize_all_variables()
        merged_summaries = tf.summary.merge_all()

sess = tf.Session(graph=graph)
writer = tf.summary.FileWriter('./improved_graph', graph)
sess.run(init)

def run_graph(input_tensor):
    feed_dict = {a: input_tensor}
    _, step, summary = sess.run([output, increment_step, merged_summaries], feed_dict=feed_dict)
    writer.add_summary(summary, global_step=step)
When I try to run the above code with
run_graph([2,8])
I get the error
InvalidArgumentError                     Traceback (most recent call last)
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'transformation_2/input/input_placeholder_A' with dtype float and shape [?]
     [[Node: transformation_2/input/input_placeholder_A = Placeholder[dtype=DT_FLOAT, shape=[?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
I do not understand what I am doing wrong here, since the code has been adjusted for the version of TensorFlow installed.
Your placeholder a is defined as type float32, but [2, 8] contains int values.
run_graph([2., 8.]) or run_graph(np.array([2, 8], dtype=np.float32)) should work.

tensorflow shuffle_batch and feed_dict error

This is the main part of my code.
I'm confused on function shuffle_batch and feed_dict.
In my code below, the features and labels I put into the function are lists. (I also tried arrays before, but it doesn't seem to matter.)
What I want to do is make my testing data (6144, 26) and training data (1024, 13) into batches of shape (100, 26) and (100, 13), then set them as the feed_dict for the placeholders.
My questions are:
1. The outputs of tf.train.shuffle_batch are Tensors, but I cannot put Tensors in the feed_dict, right?
2. When I ran the last two rows, the error said: got shape [6144, 26], but wanted [6144]. I know it may be a dimension error, but how can I fix it?
Thanks a lot.
import tensorflow as tf
import scipy.io as sio
#import signal matfile
#[('label', (8192, 13), 'double'), ('clipped_DMT', (8192, 26), 'double')]
file = sio.loadmat('DMTsignal.mat')
#get array(clipped_DMT)
data_cDMT = file['clipped_DMT']
#get array(label)
data_label = file['label']
with tf.variable_scope('split_cDMT'):
    cDMT_test_list = []
    cDMT_training_list = []
    for i in range(0, 8192):
        if i % 4 == 0:
            cDMT_test_list.append(data_cDMT[i])
        else:
            cDMT_training_list.append(data_cDMT[i])

with tf.variable_scope('split_label'):
    label_test_list = []
    label_training_list = []
    for i in range(0, 8192):
        if i % 4 == 0:
            label_test_list.append(data_label[i])
        else:
            label_training_list.append(data_label[i])

#set parameters
n_features = cDMT_training.shape[1]
n_labels = label_training.shape[1]
learning_rate = 0.8
hidden_1 = 256
hidden_2 = 128
training_steps = 1000
BATCH_SIZE = 100

#set Graph input
with tf.variable_scope('cDMT_Inputs'):
    X = tf.placeholder(tf.float32, [None, n_features], name='Input_Data')
with tf.variable_scope('labels_Inputs'):
    Y = tf.placeholder(tf.float32, [None, n_labels], name='Label_Data')

#set variables
#Initialize both W and b as tensors full of zeros
with tf.variable_scope('layerWeights'):
    h1 = tf.Variable(tf.random_normal([n_features, hidden_1]))
    h2 = tf.Variable(tf.random_normal([hidden_1, hidden_2]))
    w_out = tf.Variable(tf.random_normal([hidden_2, n_labels]))
with tf.variable_scope('layerBias'):
    b1 = tf.Variable(tf.random_normal([hidden_1]))
    b2 = tf.Variable(tf.random_normal([hidden_2]))
    b_out = tf.Variable(tf.random_normal([n_labels]))

#create model
def neural_net(x):
    layer_1 = tf.add(tf.matmul(x, h1), b1)
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, h2), b2))
    out_layer = tf.add(tf.matmul(layer_2, w_out), b_out)
    return out_layer

nn_out = neural_net(X)

#loss and optimizer
with tf.variable_scope('Loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=nn_out, labels=Y)))
with tf.name_scope('Train'):
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
with tf.name_scope('Accuracy'):
    correct_prediction = tf.equal(tf.argmax(nn_out, 1), tf.argmax(Y, 1))
    #correct_prediction = tf.metrics.accuracy(labels=Y, predictions=nn_out)
    acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Initialize
init = tf.global_variables_initializer()

# start computing & training
with tf.Session() as sess:
    sess.run(init)
    for step in range(training_steps):
        #set batch
        cmt_train_bat, label_train_bat = sess.run(tf.train.shuffle_batch([cDMT_training_list, label_training_list], batch_size=BATCH_SIZE, capacity=50000, min_after_dequeue=10000))
        cmt_test_bat, label_test_bat = sess.run(tf.train.shuffle_batch([cDMT_test_list, label_test_list], batch_size=BATCH_SIZE, capacity=50000, min_after_dequeue=10000))
From the Session.run doc:
The optional feed_dict argument allows the caller to override the
value of tensors in the graph. Each key in feed_dict can be one of the
following types:
If the key is a tf.Tensor, the value may be a Python scalar, string,
list, or numpy ndarray that can be converted to the same dtype as that
tensor. Additionally, if the key is a tf.placeholder, the shape of the
value will be checked for compatibility with the placeholder.
...
So you are right: for X and Y (which are placeholders) you can't feed a tensor and tf.train.shuffle_batch is not designed to work with placeholders.
You can follow one of two ways:
get rid of placeholders and use tf.TFRecordReader in combination with tf.train.shuffle_batch, as suggested here. This way you'll have only tensors in your model and you won't need to "feed" anything additionally.
batch and shuffle the data yourself in numpy and feed the batches into the placeholders (see the sketch below). This takes just a few lines of code, so I find it easier, though both paths are valid.
Take also into account performance considerations.
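For the second option, here is a minimal numpy sketch (the helper name shuffle_batches is made up for illustration and assumes your lists convert cleanly to float arrays of shape [N, 26] and [N, 13]):

    import numpy as np

    def shuffle_batches(features, labels, batch_size):
        # shuffle once per pass over the data, then yield consecutive slices as batches
        features = np.asarray(features, dtype=np.float32)
        labels = np.asarray(labels, dtype=np.float32)
        idx = np.random.permutation(len(features))
        for start in range(0, len(features), batch_size):
            batch_idx = idx[start:start + batch_size]
            yield features[batch_idx], labels[batch_idx]

    # usage inside the training loop
    for cmt_train_bat, label_train_bat in shuffle_batches(cDMT_training_list, label_training_list, BATCH_SIZE):
        sess.run(optimizer, feed_dict={X: cmt_train_bat, Y: label_train_bat})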

Feed a trainable variable into a placeholder in TensorFlow

Suppose I want to train a model over a number of samples (and also variables) that are known only at run time.
Case study: PCA (X W = Y)
(this is only a simplification of a much more complex model)
Take for example this simple PCA model, where only the feature dimensions (D_in and D_out) are known and fixed.
W = tf.Variable(tf.zeros([D_in, D_out]), name='weights', trainable=True)
X = tf.placeholder(tf.float32, [None, D_in], name='placeholder_latent')
Y_est = tf.matmul(X, W)
loss = tf.reduce_sum((Y_tf-Y_est)**2)
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)
Suppose now we generate some data
W_true = np.random.randn(D_in, D_out)
X_true = np.random.randn(N, D_in)
Y_true = np.dot(X_true, W_true)
Y_tf = tf.constant(Y_true.astype(np.float32))
As soon as I know the dimension of my training data, I can declare the latent variable that will be fed to the placeholder X to be optimised.
latent = tf.Variable(tf.zeros([N, D_in]), name='latent', trainable=True)
init_op = tf.global_variables_initializer()
After that, what I would like to do is to feed this latent variable to the placeholder X and run the optimisation.
with tf.Session() as sess:
    sess.run(init_op)
    for n in range(10000):
        sess.run(train_step, feed_dict={X: sess.run(latent)})
        if (n+1) % 1000 == 0:
            print('iter %i, %f' % (n+1, sess.run(loss, feed_dict={X: sess.run(latent)})))
The problem is that the optimiser does not optimise W and latent, but only W. I have also tried to feed the variable directly, without evaluating it first, but I get this error:
ValueError: setting an array element with a sequence.
Have you ever encountered this kind of issue? Do you know how to overcome this problem? Are there any possible workaround to optimise on a placeholder?
By the way, I am using TensorFlow 1.1.0rc0 with Python 2.7.13

tensorflow InvalidArgumentError: You must feed a value for placeholder tensor with dtype float

I am new to tensorflow and want to train a logistic model for classification.
# Set model weights
W = tf.Variable(tf.zeros([30, 16]))
b = tf.Variable(tf.zeros([16]))
train_X, train_Y, X, Y = input('train.csv')
#construct model
pred = model(X, W, b)
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(pred), reduction_indices=1))
# Gradient Descent
learning_rate = 0.1
#optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
get_ipython().magic(u'matplotlib inline')
import collections
import matplotlib.pyplot as plt
training_epochs = 200
batch_size = 300
train_X, train_Y, X, Y = input('train.csv')
acc = []
x = tf.placeholder(tf.float32, [None, 30])
y = tf.placeholder(tf.float32, [None, 16])
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.0
        #print(type(y_train[0][0]))
        print(type(train_X))
        print(type(train_X[0][0]))
        print X
        _, c = sess.run([optimizer, cost], feed_dict={x: train_X, y: train_Y})
The feed_dict method does not work, and gives this complaint:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_54' with dtype float
     [[Node: Placeholder_54 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'Placeholder_54':
I checked the data type for the training feature data X:
train_X type: <type 'numpy.ndarray'>
train_X[0][0]: <type 'numpy.float32'>
train_X size: (300, 30)
place_holder info : Tensor("Placeholder_56:0", shape=(?, 30), dtype=float32)
I do not know why it complains. Hope somebody can help, thanks.
From your error message, the name of the missing placeholder—'Placeholder_54'—is suspicious, because that suggests that at least 54 placeholders have been created in the current interpreter session.
There aren't enough details to say for sure, but I have some suspicions. Are you running the same code multiple times in the same interpreter session (e.g. using IPython/Jupyter or the Python shell)? Assuming that is the case, I suspect that your cost tensor depends on placeholders that were created in a previous execution of that code.
Indeed, your code creates two tf.placeholder() tensors x and y after building the rest of the model, so it seems likely that either:
The missing placeholder was created in a previous execution of this code, or
The input() function calls tf.placeholder() internally and it is these placeholders (perhaps the tensors X and Y?) that you should be feeding.
I think I ran into a similar error. It seems your graph does not have those tensors x and y in it: you created placeholders with those names, but that does not mean your graph contains tensors with those names.
Here is the link to my question (which I answered myself): link
Use this for getting all the tensors in your graph (pretty useful):
[n.name for n in tf.get_default_graph().as_graph_def().node]
Show the code for model() - I bet it defines two placeholders: X is placeholder_56, so where is placeholder_54 coming from?
Then pass the model's x and y into the feed_dict, delete your global x, y placeholders, and all will work :)
Maybe you can try adding with tf.Graph().as_default(): to avoid redefining placeholders when you re-run the code in a Jupyter notebook or IPython.
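A minimal sketch of that suggestion, assuming the placeholder shapes from the question; the important part is that the cost and optimizer are built inside the same block, so they depend on these placeholders rather than on ones created by an earlier run:

    graph = tf.Graph()
    with graph.as_default():
        x = tf.placeholder(tf.float32, [None, 30], name='x')
        y = tf.placeholder(tf.float32, [None, 16], name='y')
        # ... build pred, cost and the optimizer here, using x and y ...

    with tf.Session(graph=graph) as sess:
        sess.run(tf.global_variables_initializer())
        # _, c = sess.run([optimizer, cost], feed_dict={x: train_X, y: train_Y})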
