I have written the following code in PyCharm; it implements a fully connected layer (FCL) in TensorFlow. The placeholder raises an invalid argument error, so I set the dtype, shape, and name on the placeholder explicitly, but I still get the invalid argument error.
I want to produce a new Signal(1, 222) through the FCL model.
input Signal(1, 222) => output Signal(1, 222)
maxPredict: find the index with the highest value in the output signal.
calculateY: get the frequency array value corresponding to maxPredict.
loss: use the difference between trueY and calculateY as the loss.
loss = tf.abs(trueY - calculateY)
Code (where the error occurs)
x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')
ERROR
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
[[{{node inputX}} = Placeholder[dtype=DT_FLOAT, shape=[1,222], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
New Error Cases
I changed my code:
x = tf.placeholder(tf.float32, [None, 222], name='inputX')
Error Case 1
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - tf.Variable(newY))
ValueError: initial_value must have a shape specified: Tensor("mul:0", shape=(?,), dtype=float32)
Error Case 2
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - newY)
Traceback (most recent call last):
  File "D:/PycharmProject/DetectionSignal/TEST_FCL_StackOverflow.py", line 127, in <module>
    trainStep = opt.minimize(loss)
  File "C:\Users\Heewony\Anaconda3\envs\TSFW_pycharm\lib\site-packages\tensorflow\python\training\optimizer.py", line 407, in minimize
    ([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [<tf.Variable 'Variable:0' shape=(222, 1024) dtype=float32_ref>, <tf.Variable 'Variable_1:0' shape=(1024,) dtype=float32_ref>, ......... <tf.Variable 'Variable_5:0' shape=(222,) dtype=float32_ref>] and loss Tensor("Abs:0", dtype=float32).
Development environment
OS Platform and Distribution: Windows 10 x64
TensorFlow installed from: Anaconda
TensorFlow version: 1.12.0
Python version: 3.6.7
Mobile device: N/A
Exact command to reproduce: N/A
GPU model and memory: NVIDIA GeForce GTX 1080 Ti
CUDA/cuDNN: 9.0/7.4
Model and Function
import datetime
import numpy as np
import scipy.io as sio
import tensorflow as tf

def Model_FCL(inputX):
    data = inputX  # input signals

    # Fully Connected Layer 1
    flatConvh1 = tf.reshape(data, [-1, 222])
    fcW1 = tf.Variable(tf.truncated_normal(shape=[222, 1024], stddev=0.05))
    fcb1 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch1 = tf.nn.relu(tf.matmul(flatConvh1, fcW1) + fcb1)

    # Fully Connected Layer 2
    flatConvh2 = tf.reshape(fch1, [-1, 1024])
    fcW2 = tf.Variable(tf.truncated_normal(shape=[1024, 1024], stddev=0.05))
    fcb2 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch2 = tf.nn.relu(tf.matmul(flatConvh2, fcW2) + fcb2)

    # Output Layer
    fcW3 = tf.Variable(tf.truncated_normal(shape=[1024, 222], stddev=0.05))
    fcb3 = tf.Variable(tf.constant(0.1, shape=[222]))
    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)
    predictY = tf.nn.softmax(logits)
    return predictY, logits

def loadMatlabData(fileName):
    contentsMat = sio.loadmat(fileName)
    dataInput = contentsMat['dataInput']
    dataLabel = contentsMat['dataLabel']
    dataSize = dataInput.shape
    dataSize = dataSize[0]
    return dataInput, dataLabel, dataSize

def getNextSignal(num, data, labels, WINDOW_SIZE, OUTPUT_SIZE):
    shuffleSignal = data[num]
    shuffleLabels = labels[num]
    # shuffleSignal = shuffleSignal.reshape(1, WINDOW_SIZE)
    # shuffleSignal = np.asarray(shuffleSignal, np.float32)
    return shuffleSignal, shuffleLabels

def getBasicFrequency():
    # basicFreq => shape(222)
    basicFreq = np.array([0.598436736688, 0.610649731314, ... 3.297508549096])
    return basicFreq
Graph
basicFreq = getBasicFrequency()
myGraph = tf.Graph()

with myGraph.as_default():
    # define placeholders for receiving the input and output data
    x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')  # signal size = [1, 222]
    y = tf.placeholder(tf.float32, name='trueY')  # float value, size = [1]
    print('inputzz ', x, y)
    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())

    predictY, logits = Model_FCL(x)  # predicted signal, size = [1, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # find the max index of the predicted signal
    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60  # find the value corresponding to the freq array index
    loss = tf.abs(y - tf.Variable(newY))  # calculate absolute (true Y - predicted Y)
    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)

    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())
Session
with tf.Session(graph=myGraph) as sess:
    sess.run(tf.global_variables_initializer())
    dataFolder = './'
    writer = tf.summary.FileWriter('./logMyGraph', sess.graph)
    startTime = datetime.datetime.now()

    numberSummary = 0
    accuracyTotalTrain = []
    for trainEpoch in range(1, 25 + 1):
        arrayTrain = []
        dataPPG, dataLabel, dataSize = loadMatlabData(dataFolder + "TestValues.mat")

        for i in range(dataSize):
            batchSignal, valueTrue = getNextSignal(i, dataPPG, dataLabel, 222, 222)
            _, lossPrint, valuePredict = sess.run([trainStep, loss, newY],
                                                  feed_dict={x: batchSignal, y: valueTrue})
            print('Train ', i, ' ', valueTrue, ' - ', valuePredict, ' Loss ', lossPrint)

            arrayTrain.append(lossPrint)
            writer.add_summary(tf.Summary(value=[tf.Summary.Value(tag='Loss', simple_value=float(lossPrint))]),
                               numberSummary)
            numberSummary += 1
        accuracyTotalTrain.append(np.mean(arrayTrain))
    print('Final Train : ', accuracyTotalTrain)
    sess.close()
It seems that the variable batchSignal has the wrong type or shape. It must be a numpy array of shape exactly [1, 222]. If you want to use a batch of n × 222 examples, the placeholder x should have a shape of [None, 222] and the placeholder y a shape of [None].
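As a minimal sketch (the reshape/astype step is an assumption about what loadMatlabData returns; adapt it to your data):

x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')
y = tf.placeholder(tf.float32, shape=[None], name='trueY')

# coerce the fed values to the dtype/shape the placeholders expect
batchSignal = np.asarray(batchSignal, np.float32).reshape(1, 222)
valueTrue = np.asarray(valueTrue, np.float32).reshape(1)
sess.run([trainStep, loss], feed_dict={x: batchSignal, y: valueTrue})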
By the way, consider using tf.layers.dense instead of explicitly initializing variables and implementing the layers yourself.
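For instance, a minimal sketch of the same model with tf.layers.dense (TF 1.x API; layer sizes copied from the question's Model_FCL):

def Model_FCL(inputX):
    # dense() creates and tracks the weight and bias variables for you
    fch1 = tf.layers.dense(inputX, 1024, activation=tf.nn.relu, name='fc1')
    fch2 = tf.layers.dense(fch1, 1024, activation=tf.nn.relu, name='fc2')
    logits = tf.layers.dense(fch2, 222, name='fc3')
    return tf.nn.softmax(logits), logits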
There are three things to change.
Error Case 0. You don't need to reshape the flow between layers; you can use None in the first dimension to pass a dynamic batch size.
Error Case 1. You can use your newY directly as the output of the network. You only use tf.Variable to define weights or biases.
Error Case 2. And it seems that TensorFlow cannot provide gradients here: nothing flows back through tf.argmax()/tf.gather(), which is why the optimizer reports no gradients for any variable. For a regression problem, the mean squared error is often sufficient.
Here is how I would rewrite your code. I don't have your MATLAB part, so I can't debug the Python/MATLAB interface:
Model:
def Model_FCL(inputX):
    # Fully Connected Layer 1
    fcW1 = tf.get_variable('w1', shape=[222, 1024], initializer=tf.initializers.truncated_normal())
    fcb1 = tf.get_variable('b1', shape=[1024], initializer=tf.initializers.truncated_normal())
    # fcb1 = tf.get_variable('b1', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    fch1 = tf.nn.relu(tf.matmul(inputX, fcW1) + fcb1, name='relu1')

    # Fully Connected Layer 2
    fcW2 = tf.get_variable('w2', shape=[1024, 1024], initializer=tf.initializers.truncated_normal())
    fcb2 = tf.get_variable('b2', shape=[1024], initializer=tf.initializers.truncated_normal())
    # fcb2 = tf.get_variable('b2', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    fch2 = tf.nn.relu(tf.matmul(fch1, fcW2) + fcb2, name='relu2')

    # Output Layer
    fcW3 = tf.get_variable('w3', shape=[1024, 222], initializer=tf.initializers.truncated_normal())
    fcb3 = tf.get_variable('b3', shape=[222], initializer=tf.initializers.truncated_normal())
    # fcb3 = tf.get_variable('b3', shape=[222], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)
    predictY = tf.nn.softmax(logits)  # I'm not sure that it will learn if you do softmax and then abs/MSE
    return predictY, logits
Graph:
with myGraph.as_default():
    # define placeholders for receiving the input and output data
    # put None (dynamic batch size), not -1, in the first dimension so that you can change your batch size
    x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')  # signal size = [None, 222]
    y = tf.placeholder(tf.float32, shape=[None], name='trueY')  # float value, size = [None]
    ...
    predictY, logits = Model_FCL(x)  # predicted signal, size = [None, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # find the max index of the predicted signal
    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60  # find the value corresponding to the freq array index

    loss = tf.losses.mean_squared_error(labels=y, predictions=newY)  # maybe use MSE for a regression problem
    # loss = tf.abs(y - newY)  # absolute (true Y - predicted Y); no gradient flows back through argmax/gather
    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)
If you are still getting the same error even after feeding the right numpy shape and also maintaining the correct dtypes (np.int32 or np.float32) as suggested by the error message, then the following code should solve your problem:
#this code will print the list of placeholders and other variables declared in the memory which is causing your error
[n.name for n in tf.get_default_graph().as_graph_def().node]
#it will reset your declared placeholders so you can start over
tf.reset_default_graph()
This problem could also be avoided by restarting the kernel for every debugging run, but that is not practical.
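For example, a minimal sketch of that debugging pattern (the placeholder here is just the one from the question):

import tensorflow as tf

tf.reset_default_graph()  # clear stale placeholders left over from earlier runs
x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')
# list every node currently declared in the default graph
print([n.name for n in tf.get_default_graph().as_graph_def().node])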
I am trying to reproduce the results generated by TensorFlow's LSTMCell to be sure that I understand what it does.
Here is my TensorFlow code:
import numpy as np
import tensorflow as tf

num_units = 3
lstm = tf.nn.rnn_cell.LSTMCell(num_units=num_units)
timesteps = 7
num_input = 4
X = tf.placeholder("float", [None, timesteps, num_input])
x = tf.unstack(X, timesteps, 1)
outputs, states = tf.contrib.rnn.static_rnn(lstm, x, dtype=tf.float32)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
x_val = np.random.normal(size=(1, 7, num_input))
res = sess.run(outputs, feed_dict={X: x_val})
for e in res:
    print(e)
Here is its output:
[[-0.13285545 -0.13569424 -0.23993783]]
[[-0.04818152 0.05927373 0.2558436 ]]
[[-0.13818116 -0.13837864 -0.15348436]]
[[-0.232219 0.08512601 0.05254192]]
[[-0.20371495 -0.14795329 -0.2261929 ]]
[[-0.10371902 -0.0263292 -0.0914975 ]]
[[0.00286371 0.16377522 0.059478 ]]
And here is my own implementation:
n_steps, _ = X.shape
h = np.zeros(shape=self.hid_dim)
c = np.zeros(shape=self.hid_dim)

for i in range(n_steps):
    x = X[i, :]
    vec = np.concatenate([x, h])
    # vec = np.concatenate([h, x])
    gs = np.dot(vec, self.kernel) + self.bias

    g1 = gs[0*self.hid_dim : 1*self.hid_dim]
    g2 = gs[1*self.hid_dim : 2*self.hid_dim]
    g3 = gs[2*self.hid_dim : 3*self.hid_dim]
    g4 = gs[3*self.hid_dim : 4*self.hid_dim]

    I = vsigmoid(g1)
    N = np.tanh(g2)
    F = vsigmoid(g3)
    O = vsigmoid(g4)

    c = c*F + I*N
    h = O * np.tanh(c)
    print(h)
And here is its output:
[-0.13285543 -0.13569425 -0.23993781]
[-0.01461723 0.08060743 0.30876374]
[-0.13142865 -0.14921292 -0.16898363]
[-0.09892188 0.11739943 0.08772941]
[-0.15569218 -0.15165766 -0.21918869]
[-0.0480604 -0.00918626 -0.06084118]
[0.0963612 0.1876516 0.11888081]
As you might notice I was able to reproduce the first hidden vector, but the second one and all the following ones are different. What am I missing?
I examined this link, and your code is almost perfect, but you forgot to add the forget_bias value (default 1.0). In the line F = vsigmoid(g3), it is actually F = vsigmoid(g3 + self.forget_bias), or in your case F = vsigmoid(g3 + 1).
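In context, the corrected gate block looks like this (assuming the cell's default forget_bias of 1.0):

I = vsigmoid(g1)
N = np.tanh(g2)
F = vsigmoid(g3 + 1)  # add the forget_bias before applying the sigmoid
O = vsigmoid(g4)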
Here is my implementation with NumPy:
import numpy as np
import tensorflow as tf

num_units = 3
lstm = tf.nn.rnn_cell.LSTMCell(num_units=num_units)
batch = 1
timesteps = 7
num_input = 4
X = tf.placeholder("float", [batch, timesteps, num_input])
x = tf.unstack(X, timesteps, 1)
outputs, states = tf.contrib.rnn.static_rnn(lstm, x, dtype=tf.float32)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
x_val = np.reshape(range(28), [batch, timesteps, num_input])
res = sess.run(outputs, feed_dict={X: x_val})
for e in res:
    print(e)

print("\nmy imp\n")

# my implementation
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

kernel, bias = sess.run([lstm._kernel, lstm._bias])
f_b_ = lstm._forget_bias
c, h = np.zeros([batch, num_units]), np.zeros([batch, num_units])  # state size is num_units

for step in range(timesteps):
    inpt = np.split(x_val, 7, 1)[step][0]
    lstm_mtrx = np.matmul(np.concatenate([inpt, h], 1), kernel) + bias
    i, j, f, o = np.split(lstm_mtrx, 4, 1)
    c = sigmoid(f + f_b_) * c + sigmoid(i) * np.tanh(j)
    h = sigmoid(o) * np.tanh(c)
    print(h)
output:
[[ 0.06964055 -0.06541953 -0.00682676]]
[[ 0.005264 -0.03234607 0.00014838]]
[[ 1.617855e-04 -1.316892e-02 8.596722e-06]]
[[ 3.9425286e-06 -5.1347450e-03 7.5078127e-08]]
[[ 8.7508155e-08 -1.9560163e-03 6.3853928e-10]]
[[ 1.8867894e-09 -7.3784427e-04 5.8551406e-12]]
[[ 4.0385355e-11 -2.7728223e-04 5.3957669e-14]]
my imp
[[ 0.06964057 -0.06541953 -0.00682676]]
[[ 0.005264 -0.03234607 0.00014838]]
[[ 1.61785520e-04 -1.31689185e-02 8.59672610e-06]]
[[ 3.94252745e-06 -5.13474567e-03 7.50781122e-08]]
[[ 8.75080644e-08 -1.95601574e-03 6.38539112e-10]]
[[ 1.88678843e-09 -7.37844070e-04 5.85513438e-12]]
[[ 4.03853841e-11 -2.77282006e-04 5.39576024e-14]]
TensorFlow uses the glorot_uniform() function to initialize the LSTM kernel, which samples weights from a random uniform distribution. We need to fix a value for the kernel to get reproducible results:
import tensorflow as tf
import numpy as np

np.random.seed(0)
timesteps = 7
num_input = 4
x_val = np.random.normal(size=(1, timesteps, num_input))
num_units = 3

def glorot_uniform(shape):
    limit = np.sqrt(6.0 / (shape[0] + shape[1]))
    return np.random.uniform(low=-limit, high=limit, size=shape)

kernel_init = glorot_uniform((num_input + num_units, 4 * num_units))
My implementation of the LSTMCell (well, actually it's just slightly rewritten TensorFlow code):
def sigmoid(x):
    return 1. / (1 + np.exp(-x))

class LSTMCell():
    """Long short-term memory unit (LSTM) recurrent network cell."""

    def __init__(self, num_units, initializer=glorot_uniform,
                 forget_bias=1.0, activation=np.tanh):
        """Initialize the parameters for an LSTM cell.

        Args:
          num_units: int, The number of units in the LSTM cell.
          initializer: The initializer to use for the kernel matrix. Default: glorot_uniform
          forget_bias: Biases of the forget gate are initialized by default to 1
            in order to reduce the scale of forgetting at the beginning of
            the training.
          activation: Activation function of the inner states. Default: np.tanh.
        """
        # Inputs must be 2-dimensional.
        self._num_units = num_units
        self._forget_bias = forget_bias
        self._activation = activation
        self._initializer = initializer

    def build(self, inputs_shape):
        input_depth = inputs_shape[-1]
        h_depth = self._num_units
        self._kernel = self._initializer(shape=(input_depth + h_depth, 4 * self._num_units))
        self._bias = np.zeros(shape=(4 * self._num_units))

    def call(self, inputs, state):
        """Run one step of LSTM.

        Args:
          inputs: input numpy array, must be 2-D, `[batch, input_size]`.
          state: a tuple of numpy arrays, both `2-D`, with column sizes `c_state` and
            `m_state`.

        Returns:
          A tuple containing:
          - A `2-D, [batch, output_dim]`, numpy array representing the output of the
            LSTM after reading `inputs` when previous state was `state`.
            Here output_dim is equal to num_units.
          - Numpy array(s) representing the new state of LSTM after reading `inputs` when
            the previous state was `state`. Same type and shape(s) as `state`.
        """
        num_proj = self._num_units
        (c_prev, m_prev) = state
        input_size = inputs.shape[-1]

        # i = input_gate, j = new_input, f = forget_gate, o = output_gate
        lstm_matrix = np.hstack([inputs, m_prev]).dot(self._kernel)
        lstm_matrix += self._bias

        i, j, f, o = np.split(lstm_matrix, indices_or_sections=4, axis=0)

        # Diagonal connections
        c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
             self._activation(j))
        m = sigmoid(o) * self._activation(c)

        new_state = (c, m)
        return m, new_state

X = x_val.reshape(x_val.shape[1:])
cell = LSTMCell(num_units, initializer=lambda shape: kernel_init)
cell.build(X.shape)
state = (np.zeros(num_units), np.zeros(num_units))

for i in range(timesteps):
    x = X[i, :]
    output, state = cell.call(x, state)
    print(output)
Produces output:
[-0.21386017 -0.08401277 -0.25431477]
[-0.22243588 -0.25817422 -0.1612211 ]
[-0.2282134 -0.14207162 -0.35017249]
[-0.23286737 -0.17129192 -0.2706512 ]
[-0.11768674 -0.20717363 -0.13339118]
[-0.0599215 -0.17756104 -0.2028935 ]
[ 0.11437953 -0.19484555 0.05371994]
Your TensorFlow code, meanwhile, if you replace the second line with
lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units, initializer = tf.constant_initializer(kernel_init))
returns:
[[-0.2138602 -0.08401276 -0.25431478]]
[[-0.22243595 -0.25817424 -0.16122109]]
[[-0.22821338 -0.1420716 -0.35017252]]
[[-0.23286738 -0.1712919 -0.27065122]]
[[-0.1176867 -0.2071736 -0.13339119]]
[[-0.05992149 -0.177561 -0.2028935 ]]
[[ 0.11437953 -0.19484554 0.05371996]]
Here is a blog that will answer any conceptual questions related to LSTMs. It seems that there is a lot that goes into building an LSTM from scratch!
Of course, this answer doesn't solve your question, it just gives a direction.
In terms of linear algebra, a dimension mismatch may exist in the matrix multiplication involving I*N (red circle), affecting the output, given that an n x m matrix dotted with an m x p matrix gives an n x p dimensional output.
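As a quick NumPy illustration of that rule (a toy example, not taken from the question's code):

import numpy as np

A = np.ones((2, 3))   # n x m
B = np.ones((3, 4))   # m x p
print((A @ B).shape)  # (2, 4), i.e. n x p
# reversing the order, B @ A, has mismatched inner dimensions and raises a ValueError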
I have spent about two hours on this but could not find a solution. The closest thing to what I need is probably this boolean mask, but I am still missing the next step.
My neural network wasn't learning, so I started looking at every step it performs, and sure enough I found a problem: due to sparsity in my input layer, too many bias terms get propagated throughout. A peculiarity of my setup is that the last time matrices will be zero matrices. Let me show you: I will first show a screenshot of my notebook and will then present the code.
screenshot:
I do not want bias terms added to where the whole time is a zeros matrix. I thought I could perhaps perform an op on the boolean mask filtered matrix?
Here is the code:
import tensorflow as tf
import numpy as np

dim = 4
dtype = tf.float64

# batch x time x events x dim
tensor = np.random.rand(1, 3, 4, dim)
zeros_last_time = np.zeros((4, dim))
tensor[0][2] = zeros_last_time

input_layer = tf.placeholder(tf.float64, shape=(None, None, 4, dim))

# These are supposed to perform operations on the non-zero times
Wn = tf.Variable(
    tf.truncated_normal(dtype=dtype, shape=(dim,), mean=0, stddev=0.01),
    name="Wn")
bn = tf.Variable(tf.truncated_normal(dtype=dtype, shape=(1,), mean=0,
                                     stddev=0.01), name="bn")

# this is the op I want to be performed only on non-zero times
op = tf.einsum('bted,d->bte', input_layer, Wn) + bn

s = tf.Session()
glob_vars = tf.global_variables_initializer()
s.run(glob_vars)

# first let's see what the bias term is
s.run(bn, feed_dict={input_layer: tensor})
s.run(op, feed_dict={input_layer: tensor})
EDIT: So I believe tf.where is what I need.
Maybe a good solution is to use tf.where to create a mask that is zero where the input is zero (in the last dimension) and one otherwise.
Once we have this mask, we can simply multiply it by the bias to get the result.
Here's my solution:
import tensorflow as tf
import numpy as np

dim = 4

# batch x time x events x dim
tensor = np.random.rand(1, 3, 4, dim)
zeros_last_time = np.zeros((4, dim))
tensor[0][2] = zeros_last_time

dtype = tf.float64
input_layer = tf.placeholder(tf.float64, shape=(None, None, 4, dim))

# These are supposed to perform operations on the non-zero times
Wn = tf.Variable(
    tf.truncated_normal(dtype=dtype, shape=(dim,), mean=0, stddev=0.01),
    name="Wn")
bn = tf.Variable(
    tf.truncated_normal(dtype=dtype, shape=(1,), mean=0, stddev=0.01),
    name="bn")

# mask of shape [batch, time, events]: 0 where the whole last dimension is
# zero, 1 otherwise (note tf.equal, not ==, for element-wise comparison; the
# reduction over the last axis makes the mask's shape match op's shape below)
all_zero = tf.reduce_all(tf.equal(input_layer, 0), axis=-1)
mask = tf.where(all_zero,
                tf.zeros(tf.shape(input_layer)[:-1], dtype),
                tf.ones(tf.shape(input_layer)[:-1], dtype))
bias = bn * mask

# this is the op I want to be performed only on non-zero times
op = tf.einsum('bted,d->bte', input_layer, Wn) + bias

s = tf.Session()
glob_vars = tf.global_variables_initializer()
s.run(glob_vars)

# first let's see what the bias term is
print(s.run(bn, feed_dict={input_layer: tensor}))
print(s.run(op, feed_dict={input_layer: tensor}))
I managed to get the right bias, but then noticed that the dimensions are messed up. So this is only a partial answer:
import tensorflow as tf
import numpy as np

dim = 4

# batch x time x events x dim
tensor = np.random.rand(1, 3, 4, dim)
zeros_last_time = np.zeros((4, dim))
tensor[0][2] = zeros_last_time

dtype = tf.float64
input_layer = tf.placeholder(dtype, shape=(None, None, 4, dim))

# These are supposed to perform operations on the non-zero times
Wn = tf.Variable(
    tf.truncated_normal(dtype=dtype, shape=(dim,), mean=0, stddev=0.01),
    name="Wn")
bn = tf.Variable(
    tf.truncated_normal(dtype=dtype, shape=(1,), mean=0, stddev=0.01),
    name="bn")

zeros = tf.equal(input_layer, tf.cast(tf.zeros(tf.shape(input_layer)[2:]),
                                      tf.float64))
# bias
where_ = tf.where(zeros, tf.zeros(tf.shape(input_layer)),
                  tf.ones(tf.shape(input_layer)))
bias = bn * tf.cast(where_, tf.float64)

op = tf.einsum('bted,d->bte', input_layer, Wn) + bias  # will fail
print(bias)

s = tf.Session()
glob_vars = tf.global_variables_initializer()
s.run(glob_vars)

feed_dict = {input_layer: tensor}
s.run(bias, feed_dict)
and these two ops on the bias do the job:
biases = tf.slice(biases, [0, 0, 0, 0], [1, 3, 1, 4])
squeezed_biases = tf.squeeze(biases)
I wrote a neural network in TensorFlow for the XOR input. I used one hidden layer with 2 units and softmax classification. The input is of the form <1, x_1, x_2, zero, one>, where
1 is the bias.
x_1 and x_2 are values between 0 and 1 covering all the combinations {00, 01, 10, 11}; they are drawn from normal distributions centered at 0 or 1.
zero: is 1 if the output is zero.
one: is 1 if the output is one.
The accuracy is always around 0.5. What has gone wrong? Is the architecture of the neural network wrong, or is there something wrong with the code?
import tensorflow as tf
import numpy as np
from random import randint

DEBUG = True

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

def model(X, weight_hidden, weight_output):
    # [1,3] x [3,n_hiddent_units] = [1,n_hiddent_units]
    hiddern_units_output = tf.nn.sigmoid(tf.matmul(X, weight_hidden))
    # [1,n_hiddent_units] x [n_hiddent_units, 2] = [1,2]
    return hiddern_units_output
    #return tf.matmul(hiddern_units_output, weight_output)

def getHiddenLayerOutput(X, weight_hidden):
    hiddern_units_output = tf.nn.sigmoid(tf.matmul(X, weight_hidden))
    return hiddern_units_output

total_inputs = 100
zeros = tf.zeros([total_inputs, 1])
ones = tf.ones([total_inputs, 1])
around_zeros = tf.random_normal([total_inputs, 1], mean=0, stddev=0.01)
around_ones = tf.random_normal([total_inputs, 1], mean=1, stddev=0.01)

batch_size = 10
n_hiddent_units = 2

X = tf.placeholder("float", [None, 3])
Y = tf.placeholder("float", [None, 2])

weight_hidden = init_weights([3, n_hiddent_units])
weight_output = init_weights([n_hiddent_units, 2])

hiddern_units_output = getHiddenLayerOutput(X, weight_hidden)
py_x = model(X, weight_hidden, weight_output)

#cost = tf.square(Y - py_x)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(cost)

with tf.Session() as sess:
    tf.global_variables_initializer().run()

    trX_0_0 = sess.run(tf.concat([ones, around_zeros, around_zeros, ones, zeros], axis=1))
    trX_0_1 = sess.run(tf.concat([ones, around_zeros, around_ones, zeros, ones], axis=1))
    trX_1_0 = sess.run(tf.concat([ones, around_ones, around_zeros, zeros, ones], axis=1))
    trX_1_1 = sess.run(tf.concat([ones, around_ones, around_ones, ones, zeros], axis=1))
    trX = sess.run(tf.concat([trX_0_0, trX_0_1, trX_1_0, trX_1_1], axis=0))
    trX = sess.run(tf.random_shuffle(trX))
    print(trX)

    for i in range(10):
        for start, end in zip(range(0, len(trX), batch_size), range(batch_size, len(trX) + 1, batch_size)):
            trY = tf.identity(trX[start:end, 3:5])
            trY = sess.run(tf.reshape(trY, [batch_size, 2]))
            sess.run(train_op, feed_dict={X: trX[start:end, 0:3], Y: trY})

        start_index = randint(0, (total_inputs * 4) - batch_size)
        y_0 = sess.run(py_x, feed_dict={X: trX[start_index:start_index + batch_size, 0:3]})
        print("iteration :", i, " accuracy :", np.mean(np.absolute(trX[start_index:start_index + batch_size, 3:5] - y_0)), "\n")
Check the comments section for the updated code.
The problem was with the randomly assigned weights. Here is the modified version, obtained after a series of trial and error.
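Since the updated code itself lives in the comments section, here is only a hypothetical sketch of the kind of change described (the stddev value is an assumption, not the answer's actual fix):

def init_weights(shape):
    # hypothetical: widen the initial weights (stddev=1.0 instead of 0.01)
    # so the sigmoid hidden units start spread out enough to break symmetry
    return tf.Variable(tf.random_normal(shape, stddev=1.0))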
I'm trying to build a pixel-wise classification LSTM RNN using TensorFlow. My model is displayed in the picture below. The problem I'm having is building a 3D LSTM RNN. The code that I have builds a 2D LSTM RNN, so I placed the code inside a loop, but now I get the following error:
ValueError: Variable RNN/BasicLSTMCell/Linear/Matrix does not exist, disallowed. Did you mean to set reuse=None in VarScope?
So here's the network:
The idea goes like this... an input image of size (200,200) is the input into an LSTM RNN of size (200,200,200). Each sequence output from the LSTM tensor vector (the pink boxes in the LSTM RNN) is fed into an MLP, and then the MLP makes a single output prediction -- ergo pixel-wise prediction (you can see how one input pixel generates one output "pixel").
So here's my code:
...
n_input_x = 200
n_input_y = 200

x = tf.placeholder("float", [None, n_input_x, n_input_y])
y = tf.placeholder("float", [None, n_input_x, n_input_y])

def RNN(x):
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input_x])
    x = tf.split(0, n_steps, x)

    output_matrix = []
    for i in xrange(200):
        temp_vector = []
        lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
        outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
        for j in xrange(200):
            lstm_vector = outputs[j]
            pixel_pred = multilayer_perceptron(lstm_vector, mlp_weights, mlp_biases)
            temp_vector.append(pixel_pred)
        output_matrix.append(temp_vector)
        print i
    return output_matrix

temp = RNN(x)
pred = tf.placeholder(temp, [None, n_input_x, n_input_y])
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
...
You can see that I placed the call to the LSTM cell constructor inside the first loop, so a new RNN is generated every time. I know TensorFlow auto-increments the names of other tensors.
While debugging I have:
(Pdb) lstm_cell
<tensorflow.python.ops.rnn_cell.BasicLSTMCell object at 0x7f9d26956850>
and then for outputs I have a list of 200 LSTM output tensors:
(Pdb) len(outputs)
200
...
<tf.Tensor 'RNN_2/BasicLSTMCell_199/mul_2:0' shape=(?, 200) dtype=float32>]
So ideally, I want the second call to RNN to generate LSTM tensors with indexes 200-399.
I tried this, but it won't construct an RNN because the dimensions of 40000 and x (after the split) don't line up:
x = tf.reshape(x, [-1, n_input_x])
# Split to get a list of 'n_steps' tensors of shape (batch_size, n_hidden)
# This input shape is required by the `rnn` function
x = tf.split(0, n_input_y, x)

lstm_cell = rnn_cell.BasicLSTMCell(40000, forget_bias=1.0, state_is_tuple=True)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

output_matrix = []
for i in xrange(200):
    temp_vector = []
    for j in xrange(200):
        lstm_vector = outputs[i*j]
I also tried getting rid of the split, but then it complains that the input must be a list. I then tried reshaping with x = tf.reshape(x, [n_input_x * n_input_y]), but it still says it must be a list.