I'm quite new to TensorFlow and I can't quite work out how to shape my tensors so that the output is a single number. Basically, my recurrent network should guess the next number. Instead, with each prediction it returns a list of five numbers. I guess one or more of my tensors are misshaped.
My input data is formatted as about 2000 lists with 5 features each, like this:
[
    np.array([
        [1], [2], [3], [4], [5]
    ])
]
This is the code for the RNN:
cell_units = 400
batch_size = 5
no_of_epochs = 500
data = tf.placeholder(tf.float32, [None, 5, 1])
target = tf.placeholder(tf.float32, [None, 1, 1])

weight = tf.Variable(tf.random_normal([cell_units, 5, 1]))
bias = tf.Variable(tf.random_normal([1, 1]))

cell = tf.contrib.rnn.BasicRNNCell(num_units=cell_units)
output, states = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)
output = tf.transpose(output, [1, 0, 2])

activation = tf.matmul(output, weight) + bias

cost = tf.reduce_mean(tf.log(tf.square(activation - target)))
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    no_of_batches = int(len(train_x) / batch_size)
    for i in range(no_of_epochs):
        start = 0
        for j in range(no_of_batches):
            inp = train_x[start:start + batch_size]
            out = train_y[start:start + batch_size]
            start += batch_size
            sess.run(optimizer, {data: inp, target: out})
tf.nn.dynamic_rnn expects inputs of shape [batch_size, max_time, ...]. In your example, batch_size is dynamic (i.e., unknown) and max_time is 5 (i.e., the number of time steps). Naturally, the RNN's output contains 5 entries, one per input step: [None, 5, cell_units].
As @Ishant Mrinal suggested, you can select just the last output step.
weight = tf.Variable(tf.random_normal([cell_units, 1]))
bias = tf.Variable(tf.random_normal([1, 1]))

cell = tf.contrib.rnn.BasicRNNCell(num_units=cell_units)
output, states = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)

# Take the last time step (index 4).
output = tf.squeeze(tf.transpose(output, [0, 2, 1])[:, :, 4])  # Shape: [batch_size, cell_units].
activation = tf.matmul(output, weight) + bias
activation then has shape [batch_size, 1].
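An equivalent way to grab the last step, as a minimal sketch (same placeholder and cell as above, but indexing the time axis directly instead of transposing):

import tensorflow as tf

cell_units = 400
data = tf.placeholder(tf.float32, [None, 5, 1])
weight = tf.Variable(tf.random_normal([cell_units, 1]))
bias = tf.Variable(tf.random_normal([1, 1]))

cell = tf.contrib.rnn.BasicRNNCell(num_units=cell_units)
output, states = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)  # [batch_size, 5, cell_units]
last_step = output[:, -1, :]                      # [batch_size, cell_units], no transpose needed
activation = tf.matmul(last_step, weight) + bias  # [batch_size, 1]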
I have written the following code in PyCharm; it builds a fully connected layer (FCL) model in TensorFlow. The placeholder raises an invalid argument error. I set the dtype, shape, and name in the placeholder, but I still get the invalid argument error.
I want to produce a new Signal(1, 222) through the FCL model.
input Signal(1, 222) => output Signal(1, 222)
maxPredict: find the index with the highest value in the output signal.
calculateY: get the frequency-array value corresponding to maxPredict.
loss: use the difference between the true Y and calculateY as the loss.
loss = tf.abs(trueY - calculateY)
Code (where the error occurs)
x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')
ERROR
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
[[{{node inputX}} = Placeholder[dtype=DT_FLOAT, shape=[1,222], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
During handling of the above exception, another exception occurred:
New Error Case
I changed my code.
x = tf.placeholder(tf.float32, [None, 222], name='inputX')
Error Case 1
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - tf.Variable(newY))
ValueError: initial_value must have a shape specified: Tensor("mul:0", shape=(?,), dtype=float32)
Error Case 2
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - newY)
Traceback (most recent call last):
File "D:/PycharmProject/DetectionSignal/TEST_FCL_StackOverflow.py", line 127, in
trainStep = opt.minimize(loss)
File "C:\Users\Heewony\Anaconda3\envs\TSFW_pycharm\lib\site-packages\tensorflow\python\training\optimizer.py", line 407, in minimize
([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [<tf.Variable 'Variable:0' shape=(222, 1024) dtype=float32_ref>, <tf.Variable 'Variable_1:0' shape=(1024,) dtype=float32_ref>, ......... <tf.Variable 'Variable_5:0' shape=(222,) dtype=float32_ref>] and loss Tensor("Abs:0", dtype=float32).
Development environment
OS Platform and Distribution: Windows 10 x64
TensorFlow installed from: Anaconda
TensorFlow version: 1.12.0
Python version: 3.6.7
Mobile device: N/A
Exact command to reproduce: N/A
GPU model and memory: NVIDIA GeForce GTX 1080 Ti
CUDA/cuDNN: 9.0/7.4
Model and Function
def Model_FCL(inputX):
    data = inputX  # input signals

    # Fully connected layer 1
    flatConvh1 = tf.reshape(data, [-1, 222])
    fcW1 = tf.Variable(tf.truncated_normal(shape=[222, 1024], stddev=0.05))
    fcb1 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch1 = tf.nn.relu(tf.matmul(flatConvh1, fcW1) + fcb1)

    # Fully connected layer 2
    flatConvh2 = tf.reshape(fch1, [-1, 1024])
    fcW2 = tf.Variable(tf.truncated_normal(shape=[1024, 1024], stddev=0.05))
    fcb2 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch2 = tf.nn.relu(tf.matmul(flatConvh2, fcW2) + fcb2)

    # Output layer
    fcW3 = tf.Variable(tf.truncated_normal(shape=[1024, 222], stddev=0.05))
    fcb3 = tf.Variable(tf.constant(0.1, shape=[222]))
    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)
    predictY = tf.nn.softmax(logits)
    return predictY, logits
def loadMatlabData(fileName):
    contentsMat = sio.loadmat(fileName)
    dataInput = contentsMat['dataInput']
    dataLabel = contentsMat['dataLabel']
    dataSize = dataInput.shape[0]
    return dataInput, dataLabel, dataSize
def getNextSignal(num, data, labels, WINDOW_SIZE, OUTPUT_SIZE):
    shuffleSignal = data[num]
    shuffleLabels = labels[num]
    # shuffleSignal = shuffleSignal.reshape(1, WINDOW_SIZE)
    # shuffleSignal = np.asarray(shuffleSignal, np.float32)
    return shuffleSignal, shuffleLabels
def getBasicFrequency():
    # basicFreq => shape (222,)
    basicFreq = np.array([0.598436736688, 0.610649731314, ... 3.297508549096])
    return basicFreq
Graph
basicFreq = getBasicFrequency()
myGraph = tf.Graph()

with myGraph.as_default():
    # Define placeholders for receiving the input and output data.
    x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')  # Signal size = [1, 222]
    y = tf.placeholder(tf.float32, name='trueY')  # Float value, size = [1]
    print('inputzz ', x, y)
    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())

    predictY, logits = Model_FCL(x)  # Predicted signal, size = [1, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # Index of the max of the predicted signal
    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60  # Value in the frequency array at that index
    loss = tf.abs(y - tf.Variable(newY))  # Absolute difference (true Y - predicted Y)

    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)

    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())
Session
with tf.Session(graph=myGraph) as sess:
    sess.run(tf.global_variables_initializer())

    dataFolder = './'
    writer = tf.summary.FileWriter('./logMyGraph', sess.graph)
    startTime = datetime.datetime.now()

    numberSummary = 0
    accuracyTotalTrain = []
    for trainEpoch in range(1, 25 + 1):
        arrayTrain = []
        dataPPG, dataLabel, dataSize = loadMatlabData(dataFolder + "TestValues.mat")

        for i in range(dataSize):
            batchSignal, valueTrue = getNextSignal(i, dataPPG, dataLabel, 222, 222)
            _, lossPrint, valuePredict = sess.run([trainStep, loss, newY],
                                                  feed_dict={x: batchSignal, y: valueTrue})
            print('Train ', i, ' ', valueTrue, ' - ', valuePredict, ' Loss ', lossPrint)

            arrayTrain.append(lossPrint)
            writer.add_summary(
                tf.Summary(value=[tf.Summary.Value(tag='Loss', simple_value=float(lossPrint))]),
                numberSummary)
            numberSummary += 1
        accuracyTotalTrain.append(np.mean(arrayTrain))

    print('Final Train : ', accuracyTotalTrain)
    sess.close()
It seems that the variable batchSignal has the wrong type or shape. It must be a numpy array of shape exactly [1, 222]. If you want to use a batch of examples of size n × 222, the placeholder x should have a shape of [None, 222] and the placeholder y a shape of [None].
By the way, consider using tf.layers.dense instead of explicitly initializing variables and implementing the layers yourself.
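For instance, here is a minimal sketch of the same model written with tf.layers.dense (the layer sizes come from the question; the layer names are mine):

import tensorflow as tf

def Model_FCL(inputX):
    # Each dense call creates and tracks its own weight and bias variables.
    fch1 = tf.layers.dense(inputX, 1024, activation=tf.nn.relu, name='fc1')
    fch2 = tf.layers.dense(fch1, 1024, activation=tf.nn.relu, name='fc2')
    logits = tf.layers.dense(fch2, 222, name='fc3')
    predictY = tf.nn.softmax(logits)
    return predictY, logits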
There are a few things to change.
Error Case 0: you don't need to reshape the flow between layers. You can use None in the first dimension to pass a dynamic batch size.
Error Case 1: you can use newY directly as the output of the network; tf.Variable is only for defining weights or biases.
Error Case 2: it seems that TensorFlow can't compute gradients through this tf.abs()/tf.gather() construction (the tf.argmax in between is not differentiable). For a regression problem, mean squared error is often sufficient.
Here is how I would rewrite your code. I don't have your MATLAB data, so I can't debug the Python/MATLAB interface:
Model:
def Model_FCL(inputX):
    # Fully connected layer 1
    fcW1 = tf.get_variable('w1', shape=[222, 1024], initializer=tf.truncated_normal_initializer())
    fcb1 = tf.get_variable('b1', shape=[1024], initializer=tf.truncated_normal_initializer())
    # fcb1 = tf.get_variable('b1', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want a fixed bias
    fch1 = tf.nn.relu(tf.matmul(inputX, fcW1) + fcb1, name='relu1')

    # Fully connected layer 2
    fcW2 = tf.get_variable('w2', shape=[1024, 1024], initializer=tf.truncated_normal_initializer())
    fcb2 = tf.get_variable('b2', shape=[1024], initializer=tf.truncated_normal_initializer())
    # fcb2 = tf.get_variable('b2', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want a fixed bias
    fch2 = tf.nn.relu(tf.matmul(fch1, fcW2) + fcb2, name='relu2')

    # Output layer
    fcW3 = tf.get_variable('w3', shape=[1024, 222], initializer=tf.truncated_normal_initializer())
    fcb3 = tf.get_variable('b3', shape=[222], initializer=tf.truncated_normal_initializer())
    # fcb3 = tf.get_variable('b3', shape=[222], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want a fixed bias
    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)
    predictY = tf.nn.softmax(logits)  # I'm not sure it will learn if you do softmax then abs/MSE
    return predictY, logits
Graph:
with myGraph.as_default():
    # Define placeholders for receiving the input and output data.
    # Put None (dynamic batch size), not -1, in the first dimension so that you can change your batch size.
    x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')  # Signal size = [None, 222]
    y = tf.placeholder(tf.float32, shape=[None], name='trueY')  # Float values, size = [None]
    ...
    predictY, logits = Model_FCL(x)  # Predicted signal, size = [None, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # Index of the max of the predicted signal
    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60  # Value in the frequency array at that index

    loss = tf.losses.mean_squared_error(labels=y, predictions=newY)  # maybe use MSE for a regression problem
    # loss = tf.abs(y - newY)  # absolute difference (true Y - predicted Y); no usable gradient here

    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)
If you are still getting the same error even after feeding the right numpy shape and maintaining the correct dtypes (np.int32 or np.float32) as suggested by the error message, then the following code should solve your problem:
# This prints the placeholders and other nodes declared in the default graph,
# including the stale one that is causing your error:
[n.name for n in tf.get_default_graph().as_graph_def().node]

# This resets the default graph, clearing the declared placeholders so you can start over:
tf.reset_default_graph()
This problem could also be worked around by restarting the kernel for each debug run, but that isn't practical.
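As a minimal sketch of that pattern (the placeholder name and shape come from the question; the zero-filled feed value is only illustrative):

import numpy as np
import tensorflow as tf

tf.reset_default_graph()  # drop any stale 'inputX' left over from a previous run
x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')

with tf.Session() as sess:
    out = sess.run(x, feed_dict={x: np.zeros((1, 222), np.float32)})
    print(out.shape)  # (1, 222)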
I am trying to implement a fully convolutional network (5 layers) in TensorFlow.
But after a short time of training, all my logits collapse to 0.
Has anyone had the same problem?
Here is how I implemented my CONV-ReLU-maxPOOL layer :
def conv_relu_layer(in_data, nb_filters, filter_shape):
    nb_in_channels = int(in_data.shape[3])
    conv_shape = [filter_shape[0], filter_shape[1],
                  nb_in_channels, nb_filters]
    weights = tf.Variable(
        tf.truncated_normal(conv_shape, mean=0., stddev=.05))
    bias = tf.Variable(
        tf.truncated_normal([nb_filters], mean=0., stddev=1.))
    output = tf.nn.conv2d(in_data, weights, [1, 1, 1, 1], padding="SAME")
    output += bias
    output = tf.nn.relu(output)
    return output
def conv_relu_pool_layer(in_data, nb_filters, filter_shape, pool_shape,
                         pooling=tf.nn.max_pool):
    conv_out = conv_relu_layer(in_data, nb_filters, filter_shape)
    ksize = [1, pool_shape[0], pool_shape[1], 1]
    strides = [1, pool_shape[0], pool_shape[1], 1]
    return pooling(conv_out, ksize=ksize, strides=strides, padding="SAME")
Here is my network :
def create_network_5C(in_data, name="5C"):
    c1 = conv_relu_pool_layer(in_data, 64, [5, 5], [2, 2])
    c2 = conv_relu_pool_layer(c1, 128, [5, 5], [2, 2])
    c3 = conv_relu_pool_layer(c2, 256, [5, 5], [2, 2])
    c4 = conv_relu_pool_layer(c3, 64, [5, 5], [2, 2])
    return conv_relu_layer(c4, 2, [5, 5])
The loss function :
def loss(logits, labels, num_classes):
    with tf.name_scope('loss'):
        logits = tf.reshape(logits, (-1, num_classes))
        epsilon = tf.constant(value=1e-4)
        labels = tf.to_float(tf.reshape(labels, (-1, num_classes)))
        softmax = tf.nn.softmax(logits) + epsilon
        cross_entropy = -tf.reduce_sum(
            tf.multiply(labels, tf.log(softmax)),
            reduction_indices=[1])
        cross_entropy_mean = tf.reduce_mean(cross_entropy)
        tf.add_to_collection('losses', cross_entropy_mean)
        loss = tf.add_n(tf.get_collection('losses'))
    return loss
My main routine :
batch_size = 5

# Load data
x = tf.placeholder(tf.float32, [None, 416, 416, 3], name="x")
y = tf.placeholder(tf.float32, [None, 416, 416, 1], name="y")

# Contrast normalization and computation
x_gcn = tf.map_fn(lambda img: tf.image.per_image_standardization(img), x)
logits = create_network_5C(x_gcn)

# Bring the labels to the same dimensions as the output
y_p = tf.nn.avg_pool(tf.sign(y),
                     ksize=[1, 16, 16, 1], strides=[1, 16, 16, 1], padding="SAME")
y_rshp = tf.reshape(y_p, [batch_size, 416 // 16, 416 // 16])
y_bin = tf.cast(y_rshp > .5, tf.int32)
y_1hot = tf.one_hot(y_bin, 2)

# Compute the error
error = loss(logits, y_1hot, 2)
optimizer = tf.train.AdamOptimizer(learning_rate=args.eta).minimize(error)

# Run the session
init_op = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init_op)
    err, _ = session.run([error, optimizer],
                         feed_dict={x: image_batch, y: label_batch})
I note that if I reduce my network to 2 layers only, the logits don't drop to 0, but it doesn't learn anything either. With 3 layers they drop to 0, but only after many iterations (while with 5 layers they drop to 0 within a few batches).
Can this be linked to what is called "vanishing gradients"?
If it's relevant, my specs are: Ubuntu 16.04, Python 3.6.4, TensorFlow 1.6.0.
[EDIT] My problem really looks like dead ReLUs, as mentioned here: StackOverflow : FCN training error. But my data is normalized (roughly between -2 and +2), and I have already tried changing the initial mean and stddev of my weights and biases.
[EDIT 2] I tried replacing the ReLUs with leaky ReLU or softplus; in both cases, the logits get stuck under 0.1 and the loss stays between 0.6 and 0.7.
Using leaky ReLU was actually enough; then I just needed to let it train for a huge amount of time.
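As a minimal sketch of that fix, here is the question's layer with the ReLU swapped for a leaky ReLU (the alpha value is only illustrative):

import tensorflow as tf

def conv_leaky_relu_layer(in_data, nb_filters, filter_shape, alpha=0.1):
    nb_in_channels = int(in_data.shape[3])
    conv_shape = [filter_shape[0], filter_shape[1], nb_in_channels, nb_filters]
    weights = tf.Variable(tf.truncated_normal(conv_shape, mean=0., stddev=.05))
    bias = tf.Variable(tf.truncated_normal([nb_filters], mean=0., stddev=1.))
    output = tf.nn.conv2d(in_data, weights, [1, 1, 1, 1], padding="SAME") + bias
    # Leaky ReLU keeps a small gradient for negative inputs, so units can't "die".
    return tf.nn.leaky_relu(output, alpha=alpha)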
I'm making an LSTM neural network in TensorFlow.
The input tensor size is 92.
import tensorflow as tf
from tensorflow.contrib import rnn
import data
test_x, train_x, test_y, train_y = data.get()
# Parameters
learning_rate = 0.001
epochs = 100
batch_size = 64
display_step = 10
# Network Parameters
n_input = 28 # input size
n_hidden = 128 # number of hidden units
n_classes = 20 # output size
# Placeholders
x = tf.placeholder(dtype=tf.float32, shape=[None, n_input])
y = tf.placeholder(dtype=tf.float32, shape=[None, n_classes])
# Network
def LSTM(x):
    W = tf.Variable(tf.random_normal([n_hidden, n_classes]), dtype=tf.float32)  # weights
    b = tf.Variable(tf.random_normal([n_classes]), dtype=tf.float32)  # biases

    x_shape = 92
    x = tf.transpose(x)
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(x, x_shape)

    lstm = rnn.BasicLSTMCell(
        num_units=n_hidden,
        forget_bias=1.0
    )
    outputs, states = rnn.static_rnn(
        cell=lstm,
        inputs=x,
        dtype=tf.float32
    )
    output = tf.matmul(outputs[-1], W) + b
    return output

# Train Network
def train(x):
    prediction = LSTM(x)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        output = sess.run(prediction, feed_dict={x: train_x})
        print(output)

train(x)
I'm not getting any errors, but I'm feeding an input tensor of size 92, and the matrix multiplication in the LSTM function returns a list containing one result vector, when the desired amount is 92: one result vector per input.
Is the problem that I'm matrix-multiplying only the last item in the outputs list? Like this:
output = tf.matmul( outputs[-1], W ) + b
instead of:
output = tf.matmul( outputs, W ) + b
This is the error I get when I do the latter:
ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' (op: 'MatMul') with input shapes: [92,?,128], [128,20].
static_rnn is for making the simplest recurrent neural net (here's the tf documentation). The input to it should be a sequence of tensors. Say you want to input four words: "Hi", "how", "are", "you". Your input placeholder should then consist of four n-dimensional vectors (where n is the size of each input vector), one for each word.
I think there's something wrong with your placeholder. You should initialize it with the number of inputs to the RNN: 28 is the number of dimensions in each vector, and I believe 92 is the length of the sequence (more like 92 LSTM cells).
In the output list you will get a set of vectors equal to the length of the sequence, each of size equal to the number of hidden units.
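A minimal sketch of what that implies, using the question's sizes (the zero-filled batch is only illustrative):

import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn

n_steps, n_input, n_hidden, n_classes = 92, 28, 128, 20

x = tf.placeholder(tf.float32, [None, n_steps, n_input])
W = tf.Variable(tf.random_normal([n_hidden, n_classes]))
b = tf.Variable(tf.random_normal([n_classes]))

# Turn the [batch, time, features] tensor into a length-92 list of [batch, features] tensors.
inputs = tf.unstack(x, n_steps, axis=1)
lstm = rnn.BasicLSTMCell(num_units=n_hidden, forget_bias=1.0)
outputs, states = rnn.static_rnn(lstm, inputs, dtype=tf.float32)

# One projection per time step: a list of 92 [batch, n_classes] tensors.
logits = [tf.matmul(o, W) + b for o in outputs]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.zeros((2, n_steps, n_input), np.float32)
    print(len(sess.run(logits, {x: batch})))  # 92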
I am trying to use an LSTM with inputs that have different numbers of time steps (different numbers of frames). The input to rnn.static_rnn should be a sequence of tensors (not a single tensor!), so I have to convert my input to a sequence. I tried tf.unstack and tf.split, but both need to know the exact size of the input, while one dimension of my inputs (the number of time steps) varies from input to input. Here is part of my code:
n_input = 256*256  # data input (img shape: 256*256)
n_steps = None     # timesteps
batch_size = 1

# tf Graph input
x = tf.placeholder("float", [batch_size, n_input, n_steps])
y = tf.placeholder("float", [batch_size, n_classes])

# Permuting batch_size and n_steps
x1 = tf.transpose(x, [2, 1, 0])
x1 = tf.transpose(x1, [0, 2, 1])
x3 = tf.unstack(x1, axis=0)
# or x3 = tf.split(x2, ?, 0)

# Define an LSTM cell with TensorFlow
lstm_cell = rnn.BasicLSTMCell(num_units=n_hidden, forget_bias=1.0)

# Get the LSTM cell output
outputs, states = rnn.static_rnn(lstm_cell, x3, dtype=tf.float32, sequence_length=None)
I got the following error when using tf.unstack:
ValueError: Cannot infer num from shape (?, 1, 65536)
Also, there are some discussions here and here, but none of them were useful for me. Any help is appreciated.
As explained here, tf.unstack does not work if the size along the given axis is unspecified and non-inferrable.
In your code, after the transpositions, x1 has the shape [n_steps, batch_size, n_input], and its size at axis=0 is None.
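A minimal sketch of a workaround, assuming you can switch from rnn.static_rnn to tf.nn.dynamic_rnn, which takes the whole [batch, time, features] tensor and therefore tolerates an unknown number of time steps (the n_hidden value is illustrative; the question doesn't show it):

import tensorflow as tf

n_input = 256 * 256
n_hidden = 128  # illustrative

# Leave the time dimension as None: each fed batch may have a different number of frames.
x = tf.placeholder(tf.float32, [1, None, n_input])

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_hidden, forget_bias=1.0)
# dynamic_rnn unrolls at run time, so no unstack/split over an unknown axis is needed.
outputs, states = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
# outputs has shape [1, None, n_hidden]; take outputs[:, -1, :] for the last frame.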
I find two kinds of implementations of RNN in tensorflow.
The first implementation is this (from line 124 to 129). It uses a loop to define each input step of the RNN.
with tf.variable_scope("RNN"):
for time_step in range(num_steps):
if time_step > 0: tf.get_variable_scope().reuse_variables()
(cell_output, state) = cell(inputs[:, time_step, :], state)
outputs.append(cell_output)
states.append(state)
The second implementation is this (from line 51 to 70). It doesn't use any loop to define each input step of the RNN.
def RNN(_X, _istate, _weights, _biases):
    # input shape: (batch_size, n_steps, n_input)
    _X = tf.transpose(_X, [1, 0, 2])  # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    _X = tf.reshape(_X, [-1, n_input])  # (n_steps*batch_size, n_input)
    # Linear activation
    _X = tf.matmul(_X, _weights['hidden']) + _biases['hidden']

    # Define a lstm cell with tensorflow
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Split data because rnn cell needs a list of inputs for the RNN inner loop
    _X = tf.split(0, n_steps, _X)  # n_steps * (batch_size, n_hidden)

    # Get lstm cell output
    outputs, states = rnn.rnn(lstm_cell, _X, initial_state=_istate)

    # Linear activation
    # Get inner loop last output
    return tf.matmul(outputs[-1], _weights['out']) + _biases['out']
In the first implementation, I find there is no weight matrix between the input units and the hidden units; only the weight matrix between the hidden units and the output units is defined (from line 132 to 133):
output = tf.reshape(tf.concat(1, outputs), [-1, size])
softmax_w = tf.get_variable("softmax_w", [size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
logits = tf.matmul(output, softmax_w) + softmax_b
But in the second implementation, both of the weight matrices are defined (from line 42 to 47).
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])),  # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
I wonder why?
The difference I noticed is that the second implementation uses tf.nn.rnn, which takes a list of inputs, one per time step, and generates a list of outputs, one per time step. (Inputs: a length-T list of inputs, each a tensor of shape [batch_size, input_size].)
So, if you check the code in the second implementation on line 62, the input data is shaped into n_steps * (batch_size, n_hidden):
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(0, n_steps, _X) # n_steps * (batch_size, n_hidden)
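A quick shape check of that split (hypothetical small sizes; note this uses the TF 1.x tf.split(value, num, axis) argument order, not the older tf.split(0, n_steps, _X) order shown above):

import tensorflow as tf

n_steps, batch_size, n_hidden = 4, 2, 8  # hypothetical sizes
flat = tf.zeros([n_steps * batch_size, n_hidden])  # (n_steps*batch_size, n_hidden)
steps = tf.split(flat, n_steps, 0)  # a Python list of n_steps tensors
print(len(steps), steps[0].shape)   # 4 (2, 8)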
In the first implementation, they loop through the n_time_steps, provide the input at each step, get the corresponding output, and store it in the outputs list.
Code snippet from line 113 to 117:
outputs = []
state = self._initial_state
with tf.variable_scope("RNN"):
    for time_step in range(num_steps):
        if time_step > 0:
            tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)
Coming to your second question: notice carefully how the inputs are fed to the RNN in the two implementations.
In the first implementation, the inputs are already of shape batch_size x num_steps, where each entry is a word id (an embedding lookup then maps each id to a vector of size hidden_size, which is why no explicit input-to-hidden weight matrix appears):
self._input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
Whereas in the second implementation, the initial inputs are of shape (batch_size x n_steps x n_input), so a weight matrix is required to transform each n_input-sized vector to the hidden size, yielding the shape (n_steps x batch_size x hidden_size):
# Input shape: (batch_size, n_steps, n_input)
_X = tf.transpose(_X, [1, 0, 2]) # Permute n_steps and batch_size
# Reshape to prepare input to hidden activation
_X = tf.reshape(_X, [-1, n_input]) # (n_steps*batch_size, n_input)
# Linear activation
_X = tf.matmul(_X, _weights['hidden']) + _biases['hidden']
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(0, n_steps, _X) # n_steps * (batch_size, n_hidden)
I hope this is helpful...