In a classification problem with many classes, the TensorFlow docs suggest using sampled_softmax_loss instead of a plain softmax to reduce training time.
According to the docs and source (line 1180), the call pattern for sampled_softmax_loss is:
tf.nn.sampled_softmax_loss(weights, # Shape (num_classes, dim) - floatXX
biases, # Shape (num_classes) - floatXX
labels, # Shape (batch_size, num_true) - int64
inputs, # Shape (batch_size, dim) - floatXX
num_sampled, # - int
num_classes, # - int
num_true=1,
sampled_values=None,
remove_accidental_hits=True,
partition_strategy="mod",
name="sampled_softmax_loss")
It's unclear (at least to me) how to map a real-world problem onto the shapes this loss function requires. I think the 'inputs' argument is the problem.
Here is a copy-paste-ready minimum working example that throws a matrix multiplication shape error when calling the loss function.
import tensorflow as tf
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# Dependent & Independent Variable Placeholders
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Weights and Biases
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Super simple model builder
def tiny_perceptron(x, weights, biases):
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
return out_layer
# Create the model
pred = tiny_perceptron(x, weights, biases)
# Set up loss function inputs and inspect their shapes
w = tf.transpose(weights['out'])
b = biases['out']
labels = tf.reshape(tf.argmax(y, 1), [-1,1])
inputs = pred
num_sampled = 3
num_true = 1
num_classes = n_classes
print('Shapes\n------\nw:\t%s\nb:\t%s\nlabels:\t%s\ninputs:\t%s' % (w.shape, b.shape, labels.shape, inputs.shape))
# Shapes
# ------
# w: (10, 256) # Requires (num_classes, dim) - CORRECT
# b: (10,) # Requires (num_classes) - CORRECT
# labels: (?, 1) # Requires (batch_size, num_true) - CORRECT
# inputs: (?, 10) # Requires (batch_size, dim) - Not sure
loss_function = tf.reduce_mean(tf.nn.sampled_softmax_loss(
weights=w,
biases=b,
labels=labels,
inputs=inputs,
num_sampled=num_sampled,
num_true=num_true,
num_classes=num_classes))
The final line triggers a ValueError, stating that you can't multiply tensors with shape (?,10) and (?,256). As a general rule, I'd agree with that statement. The full error is shown below:
ValueError: Dimensions must be equal, but are 10 and 256 for 'sampled_softmax_loss_2/MatMul_1' (op: 'MatMul') with input shapes: [?,10], [?,256].
If the 'dim' value from the TensorFlow docs is intended to be constant, then either the 'weights' or the 'inputs' variable going into the loss function is incorrect.
Any thoughts would be awesome; I'm totally stumped on how to use this loss function correctly, and it would have a huge impact on training time for the model we're using it for (500k classes). Thanks!
---EDIT---
It is possible to get the sample shown above to run without errors by playing with parameters and ignoring the sampled_softmax_loss call pattern's expected inputs. If you do that, it results in a trainable model that has 0 impact on prediction accuracy (as you would expect).
In your softmax layer you are multiplying your network predictions, which have dimension (num_classes,), by your w matrix, which has dimension (num_classes, num_hidden_1). So you end up trying to compare your target labels of size (num_classes,) to something that is now of size (num_hidden_1,). Change your tiny perceptron to output layer_1 instead, then change the definition of your cost. The code below might do the trick.
def tiny_perceptron(x, weights, biases):
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
return layer_1
layer_1 = tiny_perceptron(x, weights, biases)
loss_function = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights=tf.transpose(weights['out']),  # (n_classes, n_hidden_1), i.e. (num_classes, dim)
    biases=biases['out'],                  # (n_classes,)
    labels=labels,
    inputs=layer_1,                        # (batch_size, n_hidden_1), i.e. (batch_size, dim)
    num_sampled=num_sampled,
    num_true=num_true,
    num_classes=num_classes))
When you train your network with some optimizer, you will tell it to minimize loss_function, which should mean that it will adjust both sets of weights and biases.
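For completeness, here is a minimal sketch of hooking this loss up to an optimizer; the optimizer choice and learning rate are assumptions on my part, not part of the original answer.
# Hedged sketch: minimizing the sampled loss updates all trainable variables
# reachable from it, i.e. both the hidden-layer and the output-layer parameters.
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(loss_function)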
The key point is to pass the right shapes for the weights, biases, inputs, and labels. The shape of the weights passed to sampled_softmax_loss is not the same as in the general case.
For example, if logits = xw + b, call sampled_softmax_loss like this:
sampled_softmax_loss(weights=tf.transpose(w), biases=b, inputs=x, ...), NOT sampled_softmax_loss(weights=w, biases=b, inputs=logits)!
Also, the labels are not one-hot encoded. If your labels are one-hot encoded, pass labels=tf.reshape(tf.argmax(labels_one_hot, 1), [-1, 1]).
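One more practical point (my addition, not from the original answers): sampled_softmax_loss is an approximation intended for training only; at evaluation time it is usual to compute the full logits with the same output-layer parameters and use the exact softmax loss. A minimal sketch, reusing the variables from the example above:
# Hedged sketch: sampled loss for training, exact softmax for evaluation.
# `layer_1` has shape (batch_size, n_hidden_1), `weights['out']` is
# (n_hidden_1, n_classes), and `y` holds the one-hot labels.
eval_logits = tf.matmul(layer_1, weights['out']) + biases['out']   # (batch_size, n_classes)
eval_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=eval_logits))
eval_pred = tf.argmax(eval_logits, 1)                              # predicted class indices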
I am trying to extend an existing VAE network by leaking the input to the latent variable for an autoregressive time-series problem. For this I've used an LSTM block, and the model works if I use a standard network without a Concatenate or Add layer. However, the accuracy is not great, so I'd like to make the model autoregressive in the sense that, when synthesizing the time series, it always takes mu and sigma along with the previous samples, as shown in the image.
Therefore, the following code feeds the input through an additional layer to match the dimensions.
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim),
mean=0., stddev=1.)
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
# Build encoder and decoder
latent_dim = 2
input_shape = (4, 96)
encoder_input = keras.Input(shape=input_shape)
enc = layers.LSTM(64)(encoder_input)
z_mean = layers.Dense(latent_dim, name="z_mean")(enc)
z_log_sigma = layers.Dense(latent_dim, name="z_log_var")(enc)
z = Sampling()([z_mean, z_log_sigma])
encoder = keras.Model(encoder_input, [z_mean, z_log_sigma, z], name="encoder")
encoder.summary()
# Extra layers for leaky input to decoder
leak_input_dec_ip = keras.Input(shape=input_shape)
leak_input_dec = layers.LSTM(2)(leak_input_dec_ip)
# Decoder where we have inputs from leaky input and sampling layer
inp_z = keras.Input(shape=(latent_dim,))
dec = layers.Concatenate()([leak_input_dec, inp_z]) # or Add
dec = layers.RepeatVector(96)(inp_z)
dec = layers.LSTM(64, return_sequences=True)(dec)
out = layers.TimeDistributed(layers.Dense(96))(dec)
decoder = keras.Model([leak_input_dec, inp_z], out)
decoder.summary()
But, it keeps giving the error ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_39:0", shape=(None, 4, 96), dtype=float32) at layer "lstm_29". The following previous layers were accessed without issue: ['repeat_vector_11', 'lstm_30', 'time_distributed_8']
The TensorFlow documentation for tf.nn.sparse_softmax_cross_entropy_with_logits explicitly declares that I should not apply softmax to the inputs of this op:
This op expects unscaled logits, since it performs a softmax on logits
internally for efficiency. Do not call this op with the output of
softmax, as it will produce incorrect results.
However, if I use the cross entropy without softmax, it gives me unexpected results. According to the CS231n course, the expected loss value is around 2.3 for CIFAR-10:
For example, for CIFAR-10 with a Softmax classifier we would expect
the initial loss to be 2.302, because we expect a diffuse probability
of 0.1 for each class (since there are 10 classes), and Softmax loss
is the negative log probability of the correct class so: -ln(0.1) =
2.302.
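For reference, the quoted baseline is easy to check numerically; this tiny snippet is only a sanity check of the arithmetic, not part of the original question.
import numpy as np
# Expected initial loss for a 10-class softmax classifier with uniform outputs.
print(-np.log(0.1))  # ~2.302585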
However without softmax I get much bigger values, for example 108.91984.
What exactly am I doing wrong with sparse_softmax_cross_entropy_with_logits? The TF code is shown below.
import tensorflow as tf
import numpy as np
from tensorflow.python import keras
(_, _), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_test = np.reshape(x_test, [-1, 32, 32, 3])
y_test = np.reshape(y_test, (10000,))
y_test = y_test.astype(np.int32)
x = tf.placeholder(dtype=tf.float32, shape=(None, 32, 32, 3))
y = tf.placeholder(dtype=tf.int32, shape=(None,))
layer = tf.layers.Conv2D(filters=16, kernel_size=3)(x)
layer = tf.nn.relu(layer)
layer = tf.layers.Flatten()(layer)
layer = tf.layers.Dense(units=1000)(layer)
layer = tf.nn.relu(layer)
logits = tf.layers.Dense(units=10)(layer)
# If this line is uncommented I get expected value around 2.3
# logits = tf.nn.softmax(logits)
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(loss, name='cross_entropy')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
res = sess.run(loss, feed_dict={x: x_test[0:256], y: y_test[0:256]})
print("loss: ", res)
# Expected output is value close to 2.3
# Real outputs are 108.91984, 72.82324, etc.
The issue is not in the lines
# If this line is uncommented I get expected value around 2.3
# logits = tf.nn.softmax(logits)
Images in the CIFAR-10 dataset are RGB, so the pixel values are in the range [0, 256). With such large, unnormalized inputs the randomly initialized network produces very large logits, the softmax puts almost all of its probability mass on a single (usually wrong) class, and the negative log probability of the correct class blows up. If you divide your x_test by 255
x_test = np.reshape(x_test, [-1, 32, 32, 3]).astype(np.float32) / 255
the values will be rescaled to [0, 1] and tf.nn.sparse_softmax_cross_entropy_with_logits will return the expected values.
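A minimal sketch of the preprocessing fix; the per-channel standardization variant shown in the comments is a common alternative and an assumption on my part, not from the original answer.
# Hedged sketch: rescale CIFAR-10 pixels before feeding the float placeholder.
x_test = np.reshape(x_test, [-1, 32, 32, 3]).astype(np.float32) / 255.0  # now in [0, 1]
# Common alternative: per-channel standardization (zero mean, unit variance).
# mean = x_test.mean(axis=(0, 1, 2), keepdims=True)
# std = x_test.std(axis=(0, 1, 2), keepdims=True)
# x_test = (x_test - mean) / std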
I have a problem using a different dataset than the default ones from TensorFlow.
I have code that uses the MNIST dataset to recognize digits. The application generates a graph, which is later imported by an Android app.
Now I would like to recognize digits and math operators (the basic ones: +, -, *, /).
I found a script to generate the data I need, and now I have two .pickle files.
But even with a dataset that suits me, I still don't know how to import it into my TensorFlow code.
I would be grateful for help with this, or for another (maybe easier) solution.
EDIT
I made the changes to the code that gabriele advised.
Now I get this error:
(x, label) = train_pickle_reader('train.pickle')
ValueError: too many values to unpack (expected 2)
I found the description of the dataset I used:
Extracts trace groups from inkml files.
Converts extracted trace groups into images. Images are square shaped bitmaps with only black (value 0) and white (value 1) pixels. Black color denotes patterns (ROI).
Labels those images (according to inkml files).
Flattens images to one-dimensional vectors.
Converts labels to one-hot format.
Dumps training and testing sets separately into outputs folder.
Below is the code in Python:
import tensorflow as tf
import pickle
def train_pickle_reader(filename):
with open(filename, 'rb') as f:
x = pickle.load(f)
# assuming x is already of the form (all_train_input, all_train_labels):
return x
def test_pickle_reader(filename):
with open(filename, 'rb') as f:
x = pickle.load(f)
# assuming x is already of the form (all_train_input, all_train_labels):
return x
# Function to create a weight neuron using a random number. Training will assign a real weight later
def weight_variable(shape, name):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial, name=name)
# Function to create a bias neuron. Bias of 0.1 will help to prevent any 1 neuron from being chosen too often
def biases_variable(shape, name):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial, name=name)
# Function to create a convolutional neuron. Convolutes input from 4d to 2d. This helps streamline inputs
def conv_2d(x, W, name):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME', name=name)
# Function to create a neuron to represent the max input. Helps to make the best prediction for what comes next
def max_pool(x, name):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name)
# A way to input images (as 784 element arrays of pixel values 0 - 1)
x_input = tf.placeholder(dtype=tf.float32, shape=[None, 784], name='x_input')
# A way to input labels to show model what the correct answer is during training
y_input = tf.placeholder(dtype=tf.float32, shape=[None, 10], name='y_input')
# First convolutional layer - reshape/resize images
# A weight variable that examines batches of 5x5 pixels, returns 32 features (1 feature per bit value in 32 bit float)
W_conv1 = weight_variable([5, 5, 1, 32], 'W_conv1')
# Bias variable to add to each of the 32 features
b_conv1 = biases_variable([32], 'b_conv1')
# Reshape each input image into a 28 x 28 x 1 pixel matrix
x_image = tf.reshape(x_input, [-1, 28, 28, 1], name='x_image')
# Flattens filter (W_conv1) to [5 * 5 * 1, 32], multiplies by [None, 28, 28, 1] to associate each 5x5 batch with the
# 32 features, and adds biases
h_conv1 = tf.nn.relu(conv_2d(x_image, W_conv1, name='conv1') + b_conv1, name='h_conv1')
# Takes windows of size 2x2 and computes a reduction on the output of h_conv1 (computes max, used for better prediction)
# Images are reduced to size 14 x 14 for analysis
h_pool1 = max_pool(h_conv1, name='h_pool1')
# Second convolutional layer, reshape/resize images
# Does mostly the same as above but converts each 32 unit output tensor from layer 1 to a 64 feature tensor
W_conv2 = weight_variable([5, 5, 32, 64], 'W_conv2')
b_conv2 = biases_variable([64], 'b_conv2')
h_conv2 = tf.nn.relu(conv_2d(h_pool1, W_conv2, name='conv2') + b_conv2, name='h_conv2')
# Images at this point are reduced to size 7 x 7 for analysis
h_pool2 = max_pool(h_conv2, name='h_pool2')
# First dense layer, performing calculation based on previous layer output
# Each image is 7 x 7 at the end of the previous section and outputs 64 features, we want 32 x 32 neurons = 1024
W_dense1 = weight_variable([7 * 7 * 64, 1024], name='W_dense1')
# bias variable added to each output feature
b_dense1 = biases_variable([1024], name='b_dense1')
# Flatten each of the images into size [None, 7 x 7 x 64]
h_pool_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64], name='h_pool_flat')
# Multiply weights by the outputs of the flatten neuron and add biases
h_dense1 = tf.nn.relu(tf.matmul(h_pool_flat, W_dense1, name='matmul_dense1') + b_dense1, name='h_dense1')
# Dropout layer prevents overfitting or recognizing patterns where none exist
# Depending on what value we enter into keep_prob, it will apply or not apply dropout layer
keep_prob = tf.placeholder(dtype=tf.float32, name='keep_prob')
# Dropout layer will be applied during training but not testing or predicting
h_drop1 = tf.nn.dropout(h_dense1, keep_prob, name='h_drop1')
# Readout layer used to format output
# Weight variable takes inputs from each of the 1024 neurons from before and outputs an array of 10 elements
W_readout1 = weight_variable([1024, 10], name='W_readout1')
# Apply bias to each of the 10 outputs
b_readout1 = biases_variable([10], name='b_readout1')
# Perform final calculation by multiplying each of the neurons from dropout layer by weights and adding biases
y_readout1 = tf.add(tf.matmul(h_drop1, W_readout1, name='matmul_readout1'), b_readout1, name='y_readout1')
# Softmax cross entropy loss function compares expected answers (labels) vs actual answers (logits)
cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_input, logits=y_readout1))
# Adam optimizer aims to minimize loss
train_step = tf.train.AdamOptimizer(0.0001).minimize(cross_entropy_loss)
# Compare actual vs expected outputs to see if highest number is at the same index, true if they match and false if not
correct_prediction = tf.equal(tf.argmax(y_input, 1), tf.argmax(y_readout1, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Used to save the graph and weights
saver = tf.train.Saver()
# Run in with statement so session only exists within it
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Save the graph shape and node names to pbtxt file
tf.train.write_graph(sess.graph_def, '.', 'advanced_mnist.pbtxt', False)
(x, label) = train_pickle_reader('train.pickle')
batch_size = 64 # the batch size you want to use
num_batches = len(x)//batch_size
# Train the model, running through data 20000 times in batches of 50
# Print out step # and accuracy every 100 steps and final accuracy at the end of training
# Train by running train_step and apply dropout by setting keep_prob to 0.5
for i in range(20000):
for j in range(num_batches):
x_batch = x[j * batch_size: (j + 1) * batch_size]
label_batch = label[j * batch_size: (j + 1)*batch_size]
train_step.run(feed_dict={x_input: x_batch, y_input: label_batch, keep_prob: 0.5})
# Save the session with graph shape and node weights
saver.save(sess, 'advanced_mnist.ckpt')
# Make a prediction
(x, labels) = test_pickle_reader('test.pickle')
print(sess.run(y_readout1, feed_dict={x_input: x, keep_prob: 1.0}))
In your code, after instantiating a tf.Session(), the line batch = mnist_data.train.next_batch(50) calls a built-in function which returns a tuple of the form (input, label). In order to feed the network with your data, you need to define a function that returns, e.g., NumPy arrays holding the input data and the associated labels. For example, assuming you have a pickle file containing your training data, your code should look something like this:
def pickle_reader(filename):
    with open(filename, 'rb') as f:  # pickle files must be opened in binary mode
        x = pickle.load(f)
    # assuming x is already of the form (all_train_input, all_train_labels):
    return x
[...]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
[...]
    # get your data:
    (x, label) = pickle_reader(filename)
    batch_size = 64  # the batch size you want to use
    num_batches = len(x) // batch_size
    for i in range(20000):  # number of epochs
        for j in range(num_batches):
            x_batch = x[j * batch_size: (j + 1) * batch_size]
            label_batch = label[j * batch_size: (j + 1) * batch_size]
            train_step.run(feed_dict={x_input: x_batch, y_input: label_batch, keep_prob: 0.5})
Here, feed_dict feeds the placeholder x_input with the values in x_batch and the placeholder y_input with label_batch. The session then runs the train_step operation.
When you want to make a prediction, the code is basically the same:
(x, label) = pickle_reader(test_data_filename)
print(sess.run(y_readout1, feed_dict={x_input: x, keep_prob: 1.0}))
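If you also want a summary metric instead of raw logits, a hedged sketch using the accuracy op already defined in the question's graph might look like this (the variable names are mine):
# Hedged sketch: report accuracy on the test set (no dropout at test time).
(x_test_data, label_test) = pickle_reader(test_data_filename)
test_acc = sess.run(accuracy, feed_dict={x_input: x_test_data,
                                         y_input: label_test,
                                         keep_prob: 1.0})
print('test accuracy: %.4f' % test_acc)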
I am implementing an RNN and, contrary to the examples I have found, which minimize the cost only for the output at the last time step, I want to minimize the cost for the outputs at all time steps. My setup is:
x = tf.placeholder ("float", [features_dimension, None, n_timesteps])
y = tf.placeholder ("float", [labels_dimension, None, n_timesteps])
# Define weights
weights = {'out': tf.Variable (tf.random_normal ([N_HIDDEN, labels_dimension]))}
biases = {'out': tf.Variable (tf.random_normal ([labels_dimension]))}
def RNN (x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (features_dimension, BATCH_SIZE, n_timesteps)
# Required shape: `n_timesteps` tensors list of shape (BATCH_SIZE, features_dimension)
# We make a division of the data to split it in individual vectors that
# will be fed for each timestep
# Permuting features_dimension and n_timesteps
# Shape will be (n_timesteps, BATCH_SIZE, features_dimension)
x = tf.transpose (x, [2, 1, 0])
# Reshaping to (BATCH_SIZE*n_timesteps, features_dimension) (we are removing the depth dimension with this)
x = tf.reshape(x, [BATCH_SIZE*n_timesteps, features_dimension])
# Split the previous 2D tensor to get a list of `n_timesteps` tensors of
# shape (batch_size, features_dimension).
x = tf.split (x, n_timesteps, 0)
# Define a lstm cell with tensorflow
lstm_cell = rnn.BasicLSTMCell (N_HIDDEN, forget_bias=1.0)
# Get lstm cell output
outputs, states = rnn.static_rnn (lstm_cell, x, dtype=tf.float32)
# Linear activation; outputs contains the array of outputs for all the
# timesteps
pred = tf.matmul (outputs, weights['out']) + biases['out']
However, outputs is a list of n_timesteps tensors, so pred = tf.matmul(outputs, weights['out']) + biases['out'] throws the error
ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' (op:
'MatMul') with input shapes: [100,128,16], [16,1].
How can I do this multiplication?
The solution is to tf.stack the list of tensors into a 3D tensor and then use tf.map_fn to apply the multiplication to each 2D tensor along dimension 0:
# Transform the list into a 3D tensor with dimensions (n_timesteps, batch_size, N_HIDDEN)
outputs = tf.stack(outputs)
def pred_fn(current_output):
return tf.matmul(current_output, weights['out']) + biases['out']
# Use tf.map_fn to apply pred_fn to each tensor in outputs, along dimension 0 (timestep dimension)
pred = tf.map_fn(pred_fn, outputs)
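An equivalent way to do this, sketched below, folds the time dimension into the batch dimension so that a single matmul handles every timestep at once; it is often a bit faster than tf.map_fn. This sketch starts again from the original outputs list, not from the already-stacked tensor above.
# Hedged alternative sketch: one large matmul instead of tf.map_fn.
# Assumes `outputs` is still the list of n_timesteps tensors returned by rnn.static_rnn.
stacked = tf.stack(outputs)                                # (n_timesteps, BATCH_SIZE, N_HIDDEN)
flat = tf.reshape(stacked, [-1, N_HIDDEN])                 # (n_timesteps * BATCH_SIZE, N_HIDDEN)
flat_pred = tf.matmul(flat, weights['out']) + biases['out']
pred = tf.reshape(flat_pred, [n_timesteps, -1, labels_dimension])  # back to 3D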
I have found two kinds of RNN implementations in TensorFlow.
The first implementation is this (lines 124 to 129). It uses a loop to define each input step of the RNN.
with tf.variable_scope("RNN"):
for time_step in range(num_steps):
if time_step > 0: tf.get_variable_scope().reuse_variables()
(cell_output, state) = cell(inputs[:, time_step, :], state)
outputs.append(cell_output)
states.append(state)
The second implementation is this (lines 51 to 70). It doesn't use any loop to define the input steps of the RNN.
def RNN(_X, _istate, _weights, _biases):
# input shape: (batch_size, n_steps, n_input)
_X = tf.transpose(_X, [1, 0, 2]) # permute n_steps and batch_size
# Reshape to prepare input to hidden activation
_X = tf.reshape(_X, [-1, n_input]) # (n_steps*batch_size, n_input)
# Linear activation
_X = tf.matmul(_X, _weights['hidden']) + _biases['hidden']
# Define a lstm cell with tensorflow
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(0, n_steps, _X) # n_steps * (batch_size, n_hidden)
# Get lstm cell output
outputs, states = rnn.rnn(lstm_cell, _X, initial_state=_istate)
# Linear activation
# Get inner loop last output
return tf.matmul(outputs[-1], _weights['out']) + _biases['out']
In the first implementation, I find there is no weight matrix between the input units and the hidden units; only the weight matrix between the hidden units and the output units is defined (lines 132 to 133):
output = tf.reshape(tf.concat(1, outputs), [-1, size])
softmax_w = tf.get_variable("softmax_w", [size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
logits = tf.matmul(output, softmax_w) + softmax_b
But in the second implementation, both weight matrices are defined (lines 42 to 47):
weights = {
'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
'hidden': tf.Variable(tf.random_normal([n_hidden])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
I wonder why?
The difference I noticed is that the second implementation uses tf.nn.rnn, which takes a list of inputs (one per time step) and generates a list of outputs (one per time step).
(Inputs: A length T list of inputs, each a tensor of shape
[batch_size, input_size].)
So, if you check the code in the second implementation on line 62, the input data is shaped into n_steps * (batch_size, n_hidden):
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(0, n_steps, _X) # n_steps * (batch_size, n_hidden)
In the first implementation, they loop through the n_time_steps, provide the input at each step, get the corresponding output, and store it in the outputs list.
Code snippet from lines 113 to 117:
outputs = []
state = self._initial_state
with tf.variable_scope("RNN"):
for time_step in range(num_steps):
if time_step > 0: tf.get_variable_scope().reuse_variables()
(cell_output, state) = cell(inputs[:, time_step, :], state)
outputs.append(cell_output)
Coming to your second question:
Notice carefully the way the inputs are fed to the RNN in the two implementations.
In the first implementation, the placeholder holds word ids of shape batch_size x num_steps, and these are embedded to the hidden size before reaching the cell, so the inputs arriving at the cell already have the right dimensionality and no separate input-to-hidden weight matrix is needed:
self._input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
Whereas in the second implementation, the initial inputs have shape (batch_size, n_steps, n_input), so a weight matrix is required to project the n_input features to the hidden size before the data is split into the list of n_steps tensors of shape (batch_size, n_hidden):
# Input shape: (batch_size, n_steps, n_input)
_X = tf.transpose(_X, [1, 0, 2]) # Permute n_steps and batch_size
# Reshape to prepare input to hidden activation
_X = tf.reshape(_X, [-1, n_input]) # (n_steps*batch_size, n_input)
# Linear activation
_X = tf.matmul(_X, _weights['hidden']) + _biases['hidden']
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(0, n_steps, _X) # n_steps * (batch_size, n_hidden)
I hope this is helpful...
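As a concrete illustration of the reshaping in the second implementation, here is a sketch that traces the shapes with made-up sizes. It is written against the TF 1.x tf.split argument order, which differs from the older snippet above.
import tensorflow as tf

# Made-up sizes, just to trace the shapes.
batch_size, n_steps, n_input, n_hidden = 32, 28, 28, 128

X = tf.placeholder(tf.float32, [batch_size, n_steps, n_input])
W_hidden = tf.Variable(tf.random_normal([n_input, n_hidden]))
b_hidden = tf.Variable(tf.random_normal([n_hidden]))

X_t = tf.transpose(X, [1, 0, 2])            # (n_steps, batch_size, n_input)
X_r = tf.reshape(X_t, [-1, n_input])        # (n_steps * batch_size, n_input)
X_h = tf.matmul(X_r, W_hidden) + b_hidden   # (n_steps * batch_size, n_hidden)
X_list = tf.split(X_h, n_steps, 0)          # n_steps tensors of (batch_size, n_hidden)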