I am trying to use an LSTM with inputs that have different numbers of time steps (different numbers of frames). The input to rnn.static_rnn should be a sequence of tensors (not a single tensor!), so I need to convert my input into a sequence. I tried tf.unstack and tf.split, but both need to know the exact size of the input, while one dimension of my input (the number of time steps) varies between inputs. The following is part of my code:
n_input = 256*256  # data input (img shape: 256*256)
n_steps = None  # time steps
batch_size = 1

# tf Graph input
x = tf.placeholder("float", [batch_size, n_input, n_steps])
y = tf.placeholder("float", [batch_size, n_classes])

# Permuting batch_size and n_steps
x1 = tf.transpose(x, [2, 1, 0])
x1 = tf.transpose(x1, [0, 2, 1])
x3 = tf.unstack(x1, axis=0)
# or x3 = tf.split(x2, ?, 0)

# Define an LSTM cell with tensorflow
lstm_cell = rnn.BasicLSTMCell(num_units=n_hidden, forget_bias=1.0)

# Get LSTM cell output
outputs, states = rnn.static_rnn(lstm_cell, x3, dtype=tf.float32, sequence_length=None)
I get the following error when using tf.unstack:
ValueError: Cannot infer num from shape (?, 1, 65536)
Also, there are some discussions here and here, but none of them were useful for me. Any help is appreciated.
As explained here, tf.unstack does not work if the number of items along the unstack axis is unspecified and non-inferrable.
In your code, after the transpositions, x1 has shape [n_steps, batch_size, n_input], and its size along axis=0 is None.
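One way around this (a minimal sketch, not from the original post; n_hidden is an assumed value) is to skip the unstacking entirely and feed a single 3-D tensor to tf.nn.dynamic_rnn, which allows the time dimension to be None:
import tensorflow as tf

n_input = 256*256
n_hidden = 128  # assumed value, not given in the question
batch_size = 1

# dynamic_rnn takes one tensor of shape [batch_size, n_steps, n_input];
# the time dimension may be left as None
x = tf.placeholder(tf.float32, [batch_size, None, n_input])
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_hidden, forget_bias=1.0)

# No unstack/split needed: the loop over time happens at run time
outputs, states = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)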
Related
I have an input that is a time series with 3 dimensions at each time point:
a = [[8,3],[2], [4,5],[1], [9,1],[2], ...]  # 100 time points in total. For each element, dims 0 and 1 are numerical data and dim 2 is a numerical encoding of a category. This is per sample; there are 3200 samples.
The category has 3 possible values (0, 1, 2).
I want to build an NN such that the last dimension (the category) goes through an embedding layer with output size 8, and the result is then concatenated back to the first two dims (the numerical data).
So, this will be something like:
input1 = keras.layers.Input(shape=(2,))  # the numerical features
input2 = keras.layers.Input(shape=(1,))  # the encoding of the categories; this part will be embedded to 8 dims
x2 = Embedding(input_dim=1, output_dim=8)(input2)  # apply it to every timestamp, taking only the category dim, i.e. [2], [1], [2]
x = concatenate([input1, x2])  # should give 10 dims at each time point, still 100 time points
x = LSTM(units=24)(x)  # the input has 10 dims/features at each time point, 100 time points per sample
x = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[input1, input2], outputs=[x])  # input1 is a 1D vec of width 2; input2 is a 1D vec of width 1 and goes through the embedding
model.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['acc']
)
How can I do this (preferably in Keras)?
My problem is how to apply the embedding to every time point.
Meaning, if I have 1000 time points with 3 dims each, I need to convert it to 1000 time points with 8 dims each (the embedding layer should transform input2 from (1000x1) to (1000x8)).
There are a couple of issues here. First, let me give you a working example and explain along the way how to solve them.
Imports and Data Generation
import tensorflow as tf
import numpy as np
from tensorflow.keras import layers
from tensorflow.keras.models import Model

num_timesteps = 100
max_features_values = [100, 100, 3]
num_observations = 2

input_list = [[[np.random.randint(0, v) for _ in range(num_timesteps)]
               for v in max_features_values]
              for _ in range(num_observations)]
input_arr = np.array(input_list)  # shape (2, 3, 100)
In order to use an embedding, we need to pass the voc_size as input_dim, as stated in the Embedding documentation.
Embedding and Concatenation
voc_size = len(np.unique(input_arr[:, 2, :])) + 1 # 4
Now we need to create the inputs. The inputs should be of size [None, 2, num_timesteps] and [None, 1, num_timesteps], where the first dimension is flexible and will be filled with the number of observations we pass in. Let's apply the embedding right after that, using the previously calculated voc_size.
inp1 = layers.Input(shape=(2, num_timesteps)) # TensorShape([None, 2, 100])
inp2 = layers.Input(shape=(1, num_timesteps)) # TensorShape([None, 1, 100])
x2 = layers.Embedding(input_dim=voc_size, output_dim=8)(inp2) # TensorShape([None, 1, 100, 8])
x2_reshaped = tf.transpose(tf.squeeze(x2, axis=1), [0, 2, 1]) # TensorShape([None, 8, 100])
The embedding output cannot be concatenated with inp1 directly, since all dimensions must match except the one along the concatenation axis, and these shapes unfortunately don't match. Therefore we reshape x2, as done above, by squeezing out the extra axis and then transposing.
Now we can concatenate without any issue, and everything works in a straightforward fashion:
x = layers.concatenate([inp1, x2_reshaped], axis=1)
x = layers.LSTM(32)(x)
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(inputs=[inp1, inp2], outputs=[x])
Check on Dummy Example
inp1_np = input_arr[:, :2, :]
inp2_np = input_arr[:, 2:, :]
model.predict([inp1_np, inp2_np])
# Output
# array([[0.544262 ],
#        [0.6157502]], dtype=float32)
This outputs values between 0 and 1, just as expected.
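To actually train the model, a quick sketch (reusing the compile settings from the question; the random binary labels here are purely hypothetical):
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
y_dummy = np.random.randint(0, 2, size=(num_observations, 1))  # hypothetical labels
model.fit([inp1_np, inp2_np], y_dummy, epochs=2)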
In case you are not looking for an Embedding the way it's usually used in Keras (mapping positive integers to dense vectors), you might be looking for some sort of unprojection or basis expansion, in which 3 dimensions get mapped (embedded) to 8 and the result is concatenated. This can be done using the kernel trick or other methods, but it also happens implicitly in neural networks with non-linear activations.
As such, you can do something like the following, in a similar format to pythonic833's answer (because it was good), but with timesteps in the middle, since the Keras LSTM documentation asks for [batch, timesteps, feature]:
Input generation
import tensorflow as tf
import numpy as np
from tensorflow.keras import layers
from tensorflow.keras.models import Model

num_timesteps = 100
num_features = 5
num_observations = 2

input_list = [[[np.random.randint(1, 100) for _ in range(num_features)]
               for _ in range(num_timesteps)]
              for _ in range(num_observations)]
input_arr = np.array(input_list)  # shape (2, 100, 5)
Model construction
Then you can process the inputs:
input1 = layers.Input(shape=(num_timesteps, 2))
input2 = layers.Input(shape=(num_timesteps, 3))
x2 = layers.Dense(8, activation='relu')(input2)
x = layers.concatenate([input1, x2], axis=2)  # produces tensors of shape (None, 100, 10)
x = layers.LSTM(units=24)(x)
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(inputs=[input1, input2], outputs=[x])
model.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['acc']
)
Results
inp1_np = input_arr[:, :, :2]
inp2_np = input_arr[:, :, 2:]
model.predict([inp1_np, inp2_np])
which produces
array([[0.44117224],
       [0.23611131]], dtype=float32)
Other explanations about basis expansion to check out:
https://stats.stackexchange.com/questions/527258/embedding-data-into-a-larger-dimension-space
https://www.reddit.com/r/MachineLearning/comments/2ffejw/why_dont_researchers_use_the_kernel_method_in/
Let's say that my input data, x, has the shape (2000, 2), where 2000 is the number of samples and 2 is the number of features.
So for this input data, I can setup a place holder like this:
x = tf.placeholder(tf.float32, shape=[None, 2], name='features')
My question is, if I transpose my input data, x, so that the shape is now (2, 2000) where 2000 is still the number of samples, how would I change the "shape" parameter in tf.placeholder?
I have tried setting shape=[2, None], but I just get an error. Does the 1st element in the "shape" parameter always have to be "None"?
Here is the error I get: "ValueError: The last dimension of the inputs to Dense should be defined. Found None."
import numpy as np
import tensorflow as tf

# Binary Classifier Implementation

# Training data
x_train = np.transpose(X)  # shape=(2, 2000)
y_train = np.hstack((np.zeros((1, 1000)), np.zeros((1, 1000)) + 1))  # shape=(1, 2000)

# Variables
x = tf.placeholder(tf.float32, shape=[2, None], name='features')
y_ = tf.placeholder(tf.int64, shape=[1, None], name='labels')
h1 = tf.layers.dense(inputs=x, units=50, activation=tf.nn.relu)  # one hidden layer with 50 neurons
y = tf.layers.dense(inputs=h1, units=1, activation=tf.nn.sigmoid)  # one output layer with 1 neuron

# Functions
# loss
cross_entropy = tf.losses.sigmoid_cross_entropy(multi_class_labels=y_, logits=y)
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)

# Initializer
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        sess.run([cross_entropy], feed_dict={x: x_train, y_: y_train})
I think you might be having trouble with shape inconsistency, not with transposing or having an undefined dimension size somewhere other than the first dim.
The following things work fine for me:
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 2])
x_t = tf.transpose(x)
y = tf.placeholder(tf.float32, shape=[2, None])
print(x_t.shape)
>> TensorShape([Dimension(2), Dimension(None)])
print(y.shape)
>> TensorShape([Dimension(2), Dimension(None)])
I'm assuming the first dimension is your batch size? Are you perhaps being inconsistent with this, or feeding wrongly shaped data?
Or maybe you're trying to change the shape of the tensor manually? (If so, you don't need to; tf takes care of that, as my example shows.)
That's about as much as one can help without an idea of the error you're getting, a code snippet, or anything more than a vague and general description, but I hope it helps nonetheless.
Good luck!
Update due to updated question:
The error you're getting tells you exactly what the problem is. You can transpose whatever you want, but the dense layer specifically cannot accept a tensor whose last dimension is None. This makes sense: you cannot create a fully connected layer without knowing the sizes of the weight matrices, for instance.
If you're transposing x, then you're saying you have 2000 features (and 2 data-points) and the NN needs to know this in order to create the parameters you're going to train.
If you still consider yourself to have 2 features and however-many examples, then you shouldn't be working with a shape of (2, 2000) in the first place!
Try the following:
# Training data
x_train = X  # shape=(2000, 2)
y_train = np.vstack((np.zeros((1000, 1)), np.zeros((1000, 1)) + 1))  # vstack, so shape=(2000, 1)

# Variables
x = tf.placeholder(tf.float32, shape=[None, 2], name='features')
y_ = tf.placeholder(tf.int64, shape=[None, 1], name='labels')
h1 = tf.layers.dense(inputs=x, units=50, activation=tf.nn.relu)  # one hidden layer with 50 neurons
y = tf.layers.dense(inputs=h1, units=1, activation=tf.nn.sigmoid)  # one output layer with 1 neuron

# Functions
# loss
cross_entropy = tf.losses.sigmoid_cross_entropy(multi_class_labels=y_, logits=y)
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)

# Initializer
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        sess.run([cross_entropy], feed_dict={x: x_train, y_: y_train})
Hope this helps.
On a completely different and unrelated note: I hope you know what you're doing when embedding 2 features in a 50 dimensional space that way.
Here is what I have tried:
tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_outputs])

layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons,
                                  activation=tf.nn.leaky_relu, use_peepholes=True)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

tf.summary.histogram("outputs", rnn_outputs)
tf.summary.image("RNN", rnn_outputs)
I am getting the following error:
InvalidArgumentError: Tensor must be 4-D with last dim 1, 3, or 4, not [55413,4,100]
[[Node: RNN_1 = ImageSummary[T=DT_FLOAT, bad_color=Tensor<type: uint8 shape: [4] values: 255 0 0...>, max_images=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](RNN_1/tag, rnn/transpose_1)]]
Kindly help me visualize the RNN inside the LSTM model that I am trying to run. This will help me understand more accurately what the LSTM is doing.
You can plot each RNN output as an image, with one axis being time and the other being the output. Here is a small example:
import tensorflow as tf
import numpy as np

n_steps = 100
n_inputs = 10
n_neurons = 10
n_layers = 3

x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons,
                                  activation=tf.nn.leaky_relu, use_peepholes=True)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, x, dtype=tf.float32)

# Time steps on the horizontal axis, outputs on the vertical axis;
# add a last dimension for the channel
rnn_out_imgs = tf.transpose(rnn_outputs, (0, 2, 1))[..., tf.newaxis]
out_img_sum = tf.summary.image("RNN", rnn_out_imgs, max_outputs=10)

init_op = tf.global_variables_initializer()
with tf.Session() as sess, tf.summary.FileWriter('log') as fw:
    sess.run(init_op)
    fw.add_summary(sess.run(out_img_sum, feed_dict={x: np.random.rand(10, n_steps, n_inputs)}))
The resulting visualization is one image per batch element, with the time steps along the horizontal axis and the unit outputs along the vertical axis. Brighter pixels represent stronger activations, so even if it is hard to tell what exactly is causing what, you can at least see whether any meaningful patterns arise.
Your RNN output has the wrong shape for tf.summary.image. The tensor should be four-dimensional with the dimensions' sizes given by [batch_size, height, width, channels].
In your code, you're calling tf.summary.image with rnn_outputs, which has shape [55413, 4, 100]. Assuming your images are 55413-by-100 pixels in size and that each pixel contains 4 channels (RGBA), I'd use tf.reshape to reshape rnn_outputs to [1, 55413, 100, 4]. Then you should be able to call tf.summary.image without error.
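In code, that suggestion would look something like this (a sketch only; rnn_outputs and its shape [55413, 4, 100] come from the question and the error message):
# tf.summary.image expects [batch_size, height, width, channels], so
# reinterpret rnn_outputs as a single 55413x100 image with 4 channels
rnn_out_imgs = tf.reshape(rnn_outputs, [1, 55413, 100, 4])
tf.summary.image("RNN", rnn_out_imgs)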
I don't think I can help you visualize the RNN's operation, but when I was learning about RNNs and LSTMs, I found this article very helpful.
I'm building an LSTM RNN with TensorFlow that performs pixel-wise classification (or, maybe a better way to put it, pixel-wise prediction).
Bear with me as I explain the title.
The network looks like the following drawing...
The idea goes like this... an input image of size (200,200) is the input into an LSTM RNN of size (200,200,200). Each sequence output from the LSTM tensor vector (the pink boxes in the LSTM RNN) is fed into an MLP, and then the MLP makes a single output prediction -- ergo pixel-wise prediction (you can see how one input pixel generates one output "pixel").
The code looks like this (not all of the code, just parts that are needed):
...
n_input_x = 200
n_input_y = 200

x = tf.placeholder("float", [None, n_input_x, n_input_y])
y = tf.placeholder("float", [None, n_input_x, n_input_y])

def RNN(x):
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input_x])
    x = tf.split(0, n_steps, x)

    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    output_matrix = []
    for i in xrange(200):
        temp_vector = []
        for j in xrange(200):
            lstm_vector = outputs[j]
            pixel_pred = multilayer_perceptron(lstm_vector, mlp_weights, mlp_biases)
            temp_vector.append(pixel_pred)
        output_matrix.append(temp_vector)
        print i
    return output_matrix

temp = RNN(x)
pred = tf.placeholder(temp, [None, n_input_x, n_input_y])
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
...
I have confirmed that the output of RNN -- that is, what is stored in temp -- is a 200x200 array of <tf.Tensor 'Softmax_39999:0' shape=(?, 1) dtype=float32>.
As you can see, I place temp in a tf.placeholder of the same shape (None for the batch size... or do I need this?)... and the program just exits as if it completed running. Ideally, what I want to see when I debug and print pred is something like <tf.Tensor shape=(200,200)>.
When I debug, the first time I execute pred = tf.placeholder(temp, [None, n_input_x, n_input_y]) I get TypeError: TypeErro...32>]].",) and then when it returns and I try again, it says Exception AttributeError: "'NoneType' object has no attribute 'path'" in <function _remove at 0x7f1ab77c26e0> ignored.
EDIT I also now realize that I need to place the lines
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
inside the first loop so that new 2D LSTM RNNs are generated. However, I'm getting an error about variable reuse: ValueError: Variable RNN/BasicLSTMCell/Linear/Matrix does not exist, disallowed. Did you mean to set reuse=None in VarScope?
So, in other words, is it not auto-incrementing the RNN tensor name?
A more convenient way to report shapes is with tf.shape(). In your case:
size1 = tf.shape(temp)
sess = tf.Session()
size1_fetched = sess.run(size1, feed_dict=your_feed_dict)
That way, size1_fetched is something like what you would get from NumPy. Moreover, your sizes for that particular feed_dict are given. For example, your [None, 200, 200] tensor would be [64, 200, 200].
Another question: why do you have the placeholder in the middle of your flow graph? Will you later feed in pre-defined image feature maps?
I'm trying to build a pixel-wise classification LSTM RNN using TensorFlow. My model is displayed in the picture below. The problem I'm having is building a 3D LSTM RNN. The code I have builds a 2D LSTM RNN, so I placed the code inside a loop, but now I get the following error:
ValueError: Variable RNN/BasicLSTMCell/Linear/Matrix does not exist, disallowed. Did you mean to set reuse=None in VarScope?
So here's the network:
The idea goes like this... an input image of size (200,200) is the input into an LSTM RNN of size (200,200,200). Each sequence output from the LSTM tensor vector (the pink boxes in the LSTM RNN) is fed into an MLP, and then the MLP makes a single output prediction -- ergo pixel-wise prediction (you can see how one input pixel generates one output "pixel").
So here's my code:
...
n_input_x = 200
n_input_y = 200

x = tf.placeholder("float", [None, n_input_x, n_input_y])
y = tf.placeholder("float", [None, n_input_x, n_input_y])

def RNN(x):
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input_x])
    x = tf.split(0, n_steps, x)

    output_matrix = []
    for i in xrange(200):
        temp_vector = []
        lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
        outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
        for j in xrange(200):
            lstm_vector = outputs[j]
            pixel_pred = multilayer_perceptron(lstm_vector, mlp_weights, mlp_biases)
            temp_vector.append(pixel_pred)
        output_matrix.append(temp_vector)
        print i
    return output_matrix

temp = RNN(x)
pred = tf.placeholder(temp, [None, n_input_x, n_input_y])
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
...
You can see that I placed the call to rnn.rnn inside the first loop. In this way, I generate a new RNN every time. I know TensorFlow auto-increments other tensors.
While debugging, I have:
(Pdb) lstm_cell
<tensorflow.python.ops.rnn_cell.BasicLSTMCell object at 0x7f9d26956850>
and then for outputs I have a list of 200 LSTM output tensors:
(Pdb) len(outputs)
200
...
<tf.Tensor 'RNN_2/BasicLSTMCell_199/mul_2:0' shape=(?, 200) dtype=float32>]
So ideally, I want the second call to RNN to generate LSTM tensors with indexes 200-399
I tried this, but it won't construct an RNN because the dimensions of 40000 and x (after the split) don't line up.
x = tf.reshape(x, [-1, n_input_x])
# Split to get a list of 'n_steps' tensors of shape (batch_size, n_hidden)
# This input shape is required by the `rnn` function
x = tf.split(0, n_input_y, x)

lstm_cell = rnn_cell.BasicLSTMCell(40000, forget_bias=1.0, state_is_tuple=True)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

output_matrix = []
for i in xrange(200):
    temp_vector = []
    for j in xrange(200):
        lstm_vector = outputs[i*j]
I also tried getting rid of the split, but then it complains that the input must be a list. I then tried reshaping with x = tf.reshape(x, [n_input_x * n_input_y]), but it still says the input must be a list.
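For reference, the usual way to avoid the ValueError above when you deliberately want separate LSTM weights per loop iteration is to give each iteration its own variable scope. A minimal sketch (not from the original post; it uses the same old rnn/rnn_cell API as the question, and n_hidden plus the sizes are assumed values):
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

n_steps = 200   # assumed from the question's 200x200 setup
n_input = 200
n_hidden = 128  # assumed; not given in the question

x = tf.placeholder("float", [None, n_steps, n_input])
x_seq = tf.split(0, n_steps, tf.reshape(tf.transpose(x, [1, 0, 2]), [-1, n_input]))

all_outputs = []
for i in xrange(3):  # 3 rows instead of 200 to keep the sketch small
    # A distinct scope per iteration makes rnn.rnn create fresh variables
    # (row_0/RNN/..., row_1/RNN/..., ...) instead of raising the reuse error
    with tf.variable_scope("row_%d" % i):
        lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
        outputs, states = rnn.rnn(lstm_cell, x_seq, dtype=tf.float32)
    all_outputs.append(outputs)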