I am training a CNN and I believe my use of sess.run() is causing my training to be very slow.
In essence, I am using the MNIST data set...
from tensorflow.examples.tutorials.mnist import input_data
...
...
features = input_data.read_data_sets("/tmp/data/", one_hot=True)
The issue is, the first layer of the CNN must accept the images in the form of [batch_size, 28, 28, 1], which means I must convert each image before feeding it to the CNN.
I do the following with my script...
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, 10])
...
...
with tf.Session() as sess:
    for epoch in range(25):
        total_batch = int(features.train.num_examples / 500)
        avg_cost = 0
        for i in range(total_batch):
            batch_xs, batch_ys = features.train.next_batch(10)
            # Notice this line.
            _, c = sess.run([train_op, loss],
                            feed_dict={x: sess.run(tf.reshape(batch_xs, [10, 28, 28, 1])),
                                       y: batch_ys})
            avg_cost += c / total_batch
        if (epoch + 1) % 1 == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
Notice the commented line. I take each batch from the training set and reshape it to the proper format [batch_size, 28, 28, 1]. That means I call sess.run() an extra time for every batch, and I believe this is why training is so slow.
How can I prevent this? I tried reformatting the data in another script using numpy, but I still had issues because I cannot feed the numpy array without running sess.run(). Can someone show me how to format the data outside of the training session? Maybe I can format the data in another script and load it into the one containing my CNN?
You should definitely not create new ops for the inner sess.run() at each iteration (though I'm not sure how much it really slows you down). You should do one of these:
have a placeholder of the same shape as your flattened input, e.g. [None, 28*28*1], followed by a tf.reshape(x, [-1, 28, 28, 1]) at the beginning of your network (instead of your tf.placeholder(tf.float32, [None, 28, 28, 1])); see the sketch after these options
OR
Keep your neural network, and reformat using numpy reshape instead of tensorflow: _, c = sess.run([train_op, loss], feed_dict={x: batch_xs.reshape([-1, 28, 28, 1]), y: batch_ys})
Do not be tempted to write _, c = sess.run([train_op, loss], feed_dict={x: tf.reshape(batch_xs, [10, 28, 28, 1]), y: batch_ys}) instead: feed_dict values must be numpy arrays (or similar), not tensors, and even if it were accepted it would create a new op in your graph at each iteration.
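For concreteness, here is a minimal sketch of the first option (the names x_flat, train_op and loss are assumptions mirroring the question's code): feed the flat MNIST vectors and let the graph reshape them once, at construction time.
import tensorflow as tf

# Placeholders take the flat vectors exactly as mnist provides them.
x_flat = tf.placeholder(tf.float32, [None, 28 * 28 * 1])
y = tf.placeholder(tf.float32, [None, 10])
# One reshape op, built once; the first conv layer consumes `x`.
x = tf.reshape(x_flat, [-1, 28, 28, 1])
# ... build the CNN, train_op and loss on top of `x` ...
# The training loop then feeds the raw batches directly:
#     _, c = sess.run([train_op, loss], feed_dict={x_flat: batch_xs, y: batch_ys})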
One other thing you can do is reshape all the inputs once at the beginning and then feed slices of them to the placeholder.
import math
import numpy as np
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, 10])
...
...
with tf.Session() as sess:
    X_train = mnist.train.images.reshape(-1, 28, 28, 1)
    y_train = mnist.train.labels
    train_indicies = np.arange(X_train.shape[0])
    num_epochs = 25  # number of epochs
    batch_size = 50
    total_batch = int(math.ceil(X_train.shape[0] / batch_size))
    for epoch in range(num_epochs):
        avg_cost = 0
        for i in np.arange(total_batch):
            start_idx = (i * batch_size) % X_train.shape[0]
            idx = train_indicies[start_idx:start_idx + batch_size]
            _, c = sess.run([train_op, loss], feed_dict={x: X_train[idx, :], y: y_train[idx]})
            avg_cost += c / total_batch
        if (epoch + 1) % 1 == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
Since we will not be able to use mnist.train.next_batch, we need to calculate and increment the batch indices manually.
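One hypothetical addition, if you also want batches to differ between epochs (this is not in the snippet above): reshuffle the index array at the start of each epoch.
import numpy as np

train_indicies = np.arange(55000)  # as built above from X_train.shape[0]
num_epochs = 25
for epoch in range(num_epochs):
    # Permute the indices so the fixed-order slicing yields new batches.
    np.random.shuffle(train_indicies)
    # ... slice train_indicies into batches exactly as above ...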
Hope this works :)
Related
I'm trying to get started with TensorFlow in Python, building a simple CNN with batch normalization. But when I create a new graph to run, an exception occurs in the batch normalization (BN) code.
My key code is as follows:
# exception here
def batch_norm(x, beta, gamma, phase_train, scope='bn', decay=0.9, eps=1e-5):
    with tf.variable_scope(scope):
        batch_mean, batch_var = tf.nn.moments(x, [0], name='moments')
        ema = tf.train.ExponentialMovingAverage(decay=decay)

        def mean_var_with_update():
            ema_apply_op = ema.apply([batch_mean, batch_var])
            with tf.control_dependencies([ema_apply_op]):
                return tf.identity(batch_mean), tf.identity(batch_var)

        mean, var = tf.cond(phase_train, mean_var_with_update,
                            lambda: (ema.average(batch_mean), ema.average(batch_var)))
        normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, eps)
    return normed
Training code:
# start training
output = conv2d_net()
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.002).minimize(loss)

predict = tf.reshape(output, [-1, MAX_CAPTCHA, CHAR_SET_LEN])
max_idx_p = tf.argmax(predict, 2)
max_idx_l = tf.argmax(tf.reshape(Y, [-1, MAX_CAPTCHA, CHAR_SET_LEN]), 2)
correct_pred = tf.equal(max_idx_p, max_idx_l)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 0
    while True:
        batch_x, batch_y = get_next_batch(64)
        _, loss_ = sess.run([optimizer, loss],
                            feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.75, train_phase: True})
        print(step, loss_)
        if step % 10 == 0 and step != 0:
            batch_x_test, batch_y_test = get_next_batch(100)
            acc = sess.run(accuracy,
                           feed_dict={X: batch_x_test, Y: batch_y_test, keep_prob: 1., train_phase: False})
            print("step %s,accuracy:%s" % (step, acc))
            if acc > 0.05:
                # stop training and save parameters in layer
                result_weights['wc1'] = weights['wc1'].eval(sess)
                ...
                break
        step += 1
Create new graph for exporting:
EXPORT_DIR = './model'
if os.path.exists(EXPORT_DIR):
    shutil.rmtree(EXPORT_DIR)

g = tf.Graph()
with g.as_default():
    x_2 = tf.placeholder(tf.float32, shape=[None, IMAGE_HEIGHT * IMAGE_WIDTH], name="input")
    x_image = tf.reshape(x_2, shape=[-1, IMAGE_HEIGHT, IMAGE_WIDTH, 1])
    # fill trained parameters and create new cnn layers
    WC1 = tf.constant(result_weights['wc1'], name="WC1")
    ...
    # crash here!!!
    CONV1 = conv2d(WC1, BC1, x_image, tf.constant(0.0, shape=[32]),
                   tf.random_normal(shape=[32], mean=1.0, stddev=0.02), scope='BN_1')
    OUTPUT = tf.add(tf.matmul(FULL1, W_OUT), B_OUT)
    OUTPUT = tf.nn.sigmoid(OUTPUT, name="output")

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    graph_def = g.as_graph_def()
    tf.train.write_graph(graph_def, EXPORT_DIR, 'phone_model_graph.pb', as_text=True)
I create a new graph at the end. The exception suggests that a parameter from the old training graph is being used. How can this be explained?
Thank you very much!
Log: (stack trace omitted)
I call batch_norm in the function conv2d. It seems no tensor is passed to the new graph.
def conv2d(w, b, x, tf_constant, tf_random_normal, scope, keep_p=1., phase=tf.constant(False)):
    out = tf.nn.bias_add(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'), b)
    out = batch_norm(out, tf_constant, tf_random_normal, phase, scope=scope)
    out = tf.nn.relu(out)
    out = tf.nn.max_pool(out, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    out = tf.nn.dropout(out, keep_p)
    return out
I create a new graph at last.
That's the key statement here: once a new graph is created, you can't use any tensor from the old graph in it. See a detailed explanation in this question. According to the stack trace, at least one of the tensors passed to batch_norm is defined before g.as_default(), which is why tensorflow crashes. From your code snippets it's unclear how exactly batch_norm is called, so I can't say which one.
You can check this hypothesis by printing x.graph and g and checking whether these values differ. To avoid this problem you can either do all the work inside one graph (the recommended way) or define the two graphs in different Python scopes, making it impossible to accidentally reuse the same Python variable in two graphs.
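As a standalone illustration of that check (this sketch is not the question's code): every tensor records the graph it belongs to, so comparing tensor.graph against g exposes the mix-up.
import tensorflow as tf

old_tensor = tf.constant(0.0)      # lives in the default (training) graph
g = tf.Graph()
with g.as_default():
    new_tensor = tf.constant(1.0)  # lives in the export graph g

print(old_tensor.graph is g)       # False -- using old_tensor inside g crashes
print(new_tensor.graph is g)       # True  -- safe while building g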
I'm completely new to Python and TensorFlow. I want to implement a simple kind of CNN, and this is what I have done so far:
import tensorflow as tf
import numpy as np
from libs import utils
import cv2
import glob
from tensorflow.python.framework.ops import reset_default_graph
reset_default_graph()
# We first get the graph that we used to compute the network
g = tf.get_default_graph()
# And can inspect everything inside of it
[op.name for op in g.get_operations()]
X = tf.placeholder(tf.float32, [None,720000])
Y = tf.placeholder(tf.int32, [None])
X_data = []
files = glob.glob("C:/Users/Maede/Desktop/Master Thesis/imlearning/*.jpg")
for myFile in files:
    print(myFile)
    image = cv2.imread(myFile)
    X_data.append(image)

print('X_data shape:', np.array(X_data).shape)
data = np.array(X_data)
data = np.reshape(data, (30, 720000))
label = np.array([(0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0),
                  (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0),
                  (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0), (0, 1), (1, 0)])
###########################################################
train_batch_size = 2

def random_batch():
    num_images = 30
    idx = np.random.choice(num_images,
                           size=train_batch_size,
                           replace=False)
    x_batch = data[idx, :]
    y_batch = label[idx, :]
    return x_batch, y_batch
######################
#
X_tensor = tf.reshape(X, [-1, 400, 600, 3])

filter_size = 5
n_filters_in = 3
n_filters_out = 32

W_1 = tf.get_variable(
    name='W',
    shape=[filter_size, filter_size, n_filters_in, n_filters_out],
    initializer=tf.random_normal_initializer())
b_1 = tf.get_variable(
    name='b',
    shape=[n_filters_out],
    initializer=tf.constant_initializer())
h_1 = tf.nn.relu(
    tf.nn.bias_add(
        tf.nn.conv2d(input=X_tensor,
                     filter=W_1,
                     strides=[1, 2, 2, 1],
                     padding='SAME'),
        b_1))

n_filters_in = 32
n_filters_out = 64
n_output = 2

W_2 = tf.get_variable(
    name='W2',
    shape=[filter_size, filter_size, n_filters_in, n_filters_out],
    initializer=tf.random_normal_initializer())
b_2 = tf.get_variable(
    name='b2',
    shape=[n_filters_out],
    initializer=tf.constant_initializer())
h_2 = tf.nn.relu(
    tf.nn.bias_add(
        tf.nn.conv2d(input=h_1,
                     filter=W_2,
                     strides=[1, 2, 2, 1],
                     padding='SAME'),
        b_2))

# We'll now reshape so we can connect to a fully-connected/linear layer:
h_2_flat = tf.reshape(h_2, [-1, 100 * 150 * n_filters_out])

# NOTE: This uses a slightly different version of the linear function than the lecture!
h_3, W = utils.linear(h_2_flat, 400, activation=tf.nn.relu, name='fc_1')
# NOTE: This uses a slightly different version of the linear function than the lecture!
Y_pred, W = utils.linear(h_3, n_output, activation=tf.nn.softmax, name='fc_2')

y_one_hot = tf.one_hot(Y, 2)
cross_entropy = -tf.reduce_sum(y_one_hot * tf.log(Y_pred + 1e-12))
optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(Y_pred, 1), tf.argmax(y_one_hot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))

sess = tf.Session()
sess.run(tf.global_variables_initializer())

batch_size = 2
n_epochs = 5
for epoch_i in range(n_epochs):
    for batch_xs, batch_ys in random_batch():
        sess.run(optimizer, feed_dict={
            X: np.array(batch_xs).reshape([1, 720000]),
            Y: batch_ys
        })
    valid = data  # validation data
    print(sess.run(accuracy,
                   feed_dict={
                       X: data,
                       Y: label
                   }))
The input is 30 images of dimension 400*600*3, and I want to classify them into two classes. The problem is when I'm using this command:
X: np.array(batch_xs).reshape([1,720000]),
The error is like below:
ValueError: cannot reshape array of size 2 into shape (1,720000)
and when I'm using:
X: batch_xs
The error is:
ValueError: Cannot feed value of shape (720000,) for Tensor 'Placeholder:0', which has shape '(?, 720000)'
I'm totally confused about what batch_xs's dimensions are and why they change in different situations.
np.array(batch_xs) is not the same size as your image.
for batch_xs, batch_ys in random_batch() is also a slightly strange way to run the code, and I guess this is what causes your problem. You usually use for to iterate over an iterable.
In your case the iterable is just what your function returns: a tuple with batch_xs, batch_ys. But in the same step you are unpacking the first (!) value of the tuple into the two variables batch_xs and batch_ys.
The replace=False only has an effect within a single call to random_batch(): it prevents the same index from appearing twice in one batch. Across calls, each new batch is drawn from the complete dataset again.
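A tiny standalone demo of that behaviour (not the question's code):
import numpy as np

# replace=False forbids duplicates only within a single draw; successive
# draws sample from the full index range again.
print(np.random.choice(30, size=2, replace=False))  # two distinct indices
print(np.random.choice(30, size=2, replace=False))  # may repeat indices from the first draw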
Here is a simple example based on your case:
import numpy as np

# I removed a dimension from the arrays
data = np.array([[1.0, 1.0, 1.0],
                 [2.0, 2.0, 2.0],
                 [3.0, 3.0, 3.0]])
label = np.array([[10.0, 10.0, 10.0],
                  [20.0, 20.0, 20.0],
                  [30.0, 30.0, 30.0]])

def random_batch():
    idx = np.random.choice(3, size=2)
    x_batch = data[idx, :]
    y_batch = label[idx, :]
    return x_batch, y_batch

# The outer variable names x_batch and y_batch are not related at all to the
# ones inside random_batch().
# `for x_batch, y_batch in random_batch()` is equivalent to
# `for (x_batch, y_batch) in random_batch()`.
# In the first iteration the iterable element is `x_batch`, in the second one
# `y_batch`, and each element is "unpacked". Basically, in the first iteration
# you are assigning
#     (x_batch, y_batch) = x_batch
# and in the second iteration
#     (x_batch, y_batch) = y_batch
# When unpacking, you are splitting the two elements created by `size=2`
# in `random_batch()`.
for (x_batch, y_batch) in random_batch():
    print(x_batch)
    print(y_batch)
These are fundamental Python basics; to get familiar with them, look up tuple unpacking, iterables, and for loops.
Replace the inner for-loop with this and it should work. It might not be what you expected, but it is what your code is supposed to do.
batch_xs, batch_ys = random_batch()
sess.run(optimizer, feed_dict={
    X: np.array(batch_xs).reshape([-1, 720000]),  # -1 lets numpy infer the batch size (here 2)
    Y: batch_ys
})
If you want to train with 100 batches, do something like this:
for k in range(100):
    batch_xs, batch_ys = random_batch()
    sess.run(optimizer, feed_dict={
        X: np.array(batch_xs).reshape([-1, 720000]),
        Y: batch_ys
    })
Usually you should remove as much of the code that is unrelated to the problem as possible; it makes the problem easier to find. Reduce your code to the smallest amount that still shows the problem.
Your problem is not related to tensorflow, so you could remove everything tensorflow-specific to make it easier to find. Your problem is about numpy and array shapes.
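To make that concrete, here is a minimal numpy-only reproduction of the unpacking problem (sizes follow the question: batches of 2 images with 720000 values each):
import numpy as np

# The tuple random_batch() returns:
x_batch = np.zeros((2, 720000))
y_batch = np.ones((2, 2))

# `for batch_xs, batch_ys in (x_batch, y_batch)` unpacks x_batch itself on
# the first iteration, so both names end up holding single image rows:
batch_xs, batch_ys = x_batch
print(batch_xs.shape)  # (720000,) -- not the (?, 720000) the placeholder expects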
I am using an LSTM RNN to detect whether a heartbeat is arrhythmic or not. The output classes are [0, 1] and n_classes = 2, but when this code is executed:
# Fit training using batch data
_, loss, acc = sess.run(
    [optimizer, cost, accuracy],
    feed_dict={
        x: batch_xs,
        y: batch_ys
    }
)
It gives the following error:
ValueError: Cannot feed value of shape (1, 1) for Tensor 'Placeholder_1:0', which has shape '(?, 2)'
Here is the whole code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf # Version 1.0.0 (some previous versions are used in past commits)
from sklearn import metrics
import _pickle as cPickle
import os
import pandas as pd
import functions as f
[ml2_train_input,ml2_train_output,ml2_train_peaks,ml2_test_input,ml2_test_output,ml2_test_peaks]=f.get_ml2(0.5)
ml2_train_output=f.get_binary_output(ml2_train_output[:52500])
ml2_test_output=f.get_binary_output(ml2_test_output[:52500])
# Output classes to learn how to classify
LABELS = [0, 1]
training_data_count = len(ml2_train_input[:52500]) # training series
test_data_count = len(ml2_test_input[:52500]) # testing series
n_input = 360 # 360 input parameters per timestep
# LSTM Neural Network's internal structure
n_hidden = 8 # Hidden layer num of features
n_classes = 2 # Total classes
# Training
learning_rate = 0.005
lambda_loss_amount = 0.0015
training_iters = training_data_count * 10 # Loop 10 times on the dataset
batch_size = 500
display_iter = 1000 # To show test set accuracy during training
X_test=np.array(ml2_test_input[:52500])
y_test=np.array(ml2_test_output[:52500])
# Some debugging info
print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
def LSTM_RNN(_X, _weights, _biases):
    # Returns a tensorflow LSTM (RNN) artificial neural network from the given parameters.
    # Two LSTM cells are stacked, which adds depth to the neural network.
    # Note: some code of this notebook is inspired by a slightly different
    # RNN architecture used on another dataset; some of the credit goes to
    # "aymericdamien" under the MIT license.

    # (NOTE: This step could be greatly optimised by shaping the dataset once
    # input shape: (batch_size, n_steps, n_input)
    # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    # _X = tf.reshape(_X, [-1, n_input])
    # new shape: (n_steps*batch_size, n_input)

    # Linear activation
    _X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
    # Split data because the rnn cell needs a list of inputs for the RNN inner loop
    _X = tf.split(_X, 500, 0)
    # new shape: n_steps * (batch_size, n_hidden)

    # Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
    lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True, reuse=None)
    lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True, reuse=None)
    lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)

    # Get LSTM cell output
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)

    # Get the last time step's output feature for a "many to one" style classifier
    lstm_last_output = outputs[-1]

    # Linear activation
    return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']
def extract_batch_size(_train, step, batch_size):
    # Fetch a "batch_size" amount of data from "(X|y)_train" data.
    shape = list(_train.shape)
    shape[0] = batch_size
    batch_s = np.empty(shape)
    for i in range(batch_size):
        # Loop index wraps around the dataset.
        index = ((step - 1) * batch_size + i) % len(_train)
        batch_s[i] = _train[index]
    return batch_s
def one_hot(y_):
    # Encode output labels from number indexes,
    # e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
    y_ = y_.reshape(len(y_))
    n_values = int(np.max(y_)) + 1
    return np.eye(n_values)[np.array(y_, dtype=np.int32)]  # Returns FLOATS
# Graph input/output
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])),  # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
pred = LSTM_RNN(x, weights, biases)
# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
)  # L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []
# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)
X_train=np.array(ml2_train_input[:52500])
y_train=np.array(ml2_train_output[:52500])
step = 1
while step * batch_size <= training_iters:
    batch_xs = extract_batch_size(X_train, step, batch_size)
    batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))

    # Fit training using batch data
    _, loss, acc = sess.run(
        [optimizer, cost, accuracy],
        feed_dict={
            x: batch_xs,
            y: batch_ys
        }
    )
    train_losses.append(loss)
    train_accuracies.append(acc)

    # Evaluate network only at some steps for faster training:
    if (step * batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):
        # To not spam console, show training accuracy/loss in this "if"
        print("Training iter #" + str(step * batch_size) +
              ": Batch Loss = " + "{:.6f}".format(loss) +
              ", Accuracy = {}".format(acc))

        # Evaluation on the test set (no learning made here, just evaluation for diagnosis)
        loss, acc = sess.run(
            [cost, accuracy],
            feed_dict={
                x: X_test,
                y: one_hot(y_test)
            }
        )
        test_losses.append(loss)
        test_accuracies.append(acc)
        print("PERFORMANCE ON TEST SET: " +
              "Batch Loss = {}".format(loss) +
              ", Accuracy = {}".format(acc))

    step += 1

print("Optimization Finished!")
Please help!
I feel you should convert your Y values to categorical (one-hot encoded); then it should work. The error shows that the y placeholder expects shape (?, 2) but is being fed shape (1, 1), i.e. a raw integer label instead of a one-hot vector. So try to convert your Y values to categorical.
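A minimal sketch of that conversion, assuming integer labels in {0, 1} (the helper name to_categorical is an assumption, not part of the question's code):
import numpy as np

def to_categorical(y, n_classes=2):
    # Convert integer labels to one-hot rows of width n_classes.
    y = np.asarray(y, dtype=np.int32).reshape(-1)
    return np.eye(n_classes, dtype=np.float32)[y]

batch_ys = to_categorical([1])
print(batch_ys.shape)  # (1, 2) -- now matches the (?, 2) placeholder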
There are literally thousands of these posts but I haven't seen one yet that addresses my exact problem. Please feel free to close this if one exists.
I understand that lists are mutable in Python. As a result, we cannot store a list as a key in a dictionary.
I have the following code (a ton of it is left out because it is irrelevant):
with tf.Session() as sess:
    sess.run(init)
    step = 1
    while step * batch_size < training_iterations:
        for batch_x, batch_y in batch(train_x, train_y, batch_size):
            batch_x = np.reshape(batch_x, (batch_x.shape[0],
                                           1,
                                           batch_x.shape[1]))
            batch_x.astype(np.float32)
            batch_y = np.reshape(batch_y, (batch_y.shape[0], 1))
            batch_y.astype(np.float32)
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            if step % display_step == 0:
                # Calculate batch accuracy
                acc = sess.run(accuracy,
                               feed_dict={x: batch_x, y: batch_y})
                # Calculate batch loss
                loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
                print("Iter " + str(step * batch_size) +
                      ", Minibatch Loss= " +
                      "{:.6f}".format(loss) + ", Training Accuracy= " +
                      "{:.5f}".format(acc))
            step += 1
    print("Optimization Finished!")
train_x is a [batch_size, num_features] numpy matrix
train_y is a [batch_size, num_results] numpy matrix
I have the following placeholders in my graph:
x = tf.placeholder(tf.float32, shape=(None, num_steps, num_input))
y = tf.placeholder(tf.float32, shape=(None, num_res))
So naturally I need to transform my train_x and train_y into the correct format tensorflow expects.
I do that with the following:
batch_x = np.reshape(batch_x, (batch_x.shape[0],
                               1,
                               batch_x.shape[1]))
batch_y = np.reshape(batch_y, (batch_y.shape[0], 1))
This gives me two numpy.ndarrays:
batch_x is of dimensions [batch_size, timesteps, features]
batch_y is of dimensions [batch_size, num_results]
As expected by our graph.
Now when I pass these reshaped numpy.ndarrays, I get TypeError: unhashable type: 'list' on the following line:
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
This seems strange to me because firing up python:
import numpy as np
a = np.zeros((10,3,4))
{a: 'test'}
TypeError: unhashable type: 'numpy.ndarray'
You can see I get an entirely different error message.
Further in my code I perform a series of transformations on the data:
x = tf.transpose(x, [1, 0, 2])
x = tf.reshape(x, [-1, num_input])
x = tf.split(0, num_steps, x)
lstm_cell = rnn_cell.BasicLSTMCell(num_hidden, forget_bias=forget_bias)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
And the only place a list occurs is after the split, which results in a T-sized list of tensors that rnn.rnn expects.
I am at a complete loss here. I feel like I'm staring right at the solution and I can't see it. Can anyone help me out here?
Thank you!
I feel kind of silly here, but I am sure someone else will have this problem.
The tf.split line above, which results in a list, is the problem.
I did not split these transformations into separate functions; I modified x directly (as shown in my code) and never changed the names. So when the code reached sess.run, x was no longer the placeholder tensor as expected, but rather the list of tensors produced by the graph transformations.
Renaming each transformation of x solved the problem.
I hope this helps someone.
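For concreteness, a sketch of the renaming (using the question's TF 0.x-era API; num_steps and num_input are example values here):
import tensorflow as tf

num_steps, num_input = 1, 13  # example values, use your own
x = tf.placeholder(tf.float32, shape=(None, num_steps, num_input))

# Each transformation gets its own name, so `x` keeps pointing at the
# placeholder when it is later used as a feed_dict key.
x_t = tf.transpose(x, [1, 0, 2])
x_flat = tf.reshape(x_t, [-1, num_input])
x_split = tf.split(0, num_steps, x_flat)  # the list lives in x_split, not x

# later: sess.run(optimizer, feed_dict={x: batch_x, y: batch_y}) works,
# because x is still the placeholder tensor.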
This error also occurs if x and y in feed_dict={x: batch_x, y: batch_y} are for some reason lists. In my case I had misspelled them as X and Y, and those were lists in my code.
I accidentally set the variable x as a python list in the code.
Why did it throw this error? Because in _, loss = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}), one of the feed_dict keys (x or y) is a list or tuple instead of a tensor. Keys must be tensors, so print out the types of both variables to see what's wrong with the code.
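A quick standalone illustration of that diagnostic (TF 1.x graph mode; not the question's code): feed_dict keys must be hashable tensors, so printing the types exposes an accidental list immediately.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 1])
x_list = [x]  # an accidental list, as described above

print(type(x))       # a tf.Tensor -- hashable, fine as a feed_dict key
print(type(x_list))  # <class 'list'> -- unhashable, triggers the error
d = {x: 0.0}         # works
# d = {x_list: 0.0}  # TypeError: unhashable type: 'list'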
I have a simple program, mostly copied from the MNIST tutorial on TensorFlow. I have a 2D array 118 items long, with each subarray being 13 long, and a second 2D array that is 118 long with a single integer in each subarray, containing either 1, 2, or 3 (the matching class of the first array's item).
Whenever I run it, however, I get various dimension errors:
either ValueError: Dimensions X and X are not compatible
or ValueError: Incompatible shapes for broadcasting: (?, 13) and (3,)
or something along those lines. I've tried nearly every combination of numbers I can think of in the various places to get it to align properly, but am unable to.
x = tf.placeholder(tf.float32, [None, 13])
W = tf.Variable(tf.zeros([118, 13]))
b = tf.Variable(tf.zeros([3]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 13])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    batch_xs = npWineList
    batch_ys = npWineClass
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
First, it's not clear how many labels you have (3 or 13) or what the size of the input (X) vector is (118 or 13). I assume you have 13 labels and 118-dimensional X vectors, based on:
W = tf.Variable(tf.zeros([118, 13]))
y_ = tf.placeholder(tf.float32, [None, 13])
Then you may change your code to something like this:
x = tf.placeholder(tf.float32, [None, 118])
W = tf.Variable(tf.zeros([118, 13]))
b = tf.Variable(tf.zeros([13]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 13])
Let me know if it addresses your issue.