mxnet: how to debug models with mismatched shapes - python

I am trying to modify a model I found online (https://github.com/apache/incubator-mxnet/tree/master/example/multivariate_time_series) as I work to get to know mxnet. I am trying to build a model that runs a CNN and an RNN in parallel and then uses the outputs of both to forecast a time series. However, I am running into this error:
RuntimeError: simple_bind error. Arguments: data: (128, 96, 20)
softmax_label: (128, 20) Error in operator concat1: [15:44:09]
src/operator/nn/concat.cc:66: Check failed:
shape_assign(&(*in_shape)[i], dshape) Incompatible input shape:
expected [128,0], got [128,96,300]
This is the code, as I have tried to modify it:
def rnn_cnn_model(iter_train, q, filter_list, num_filter, dropout, seasonal_period, time_interval):
    # Choose cells for recurrent layers: each cell will take the output of the previous cell in the list
    rcells = [mx.rnn.GRUCell(num_hidden=args.recurrent_state_size)]
    skiprcells = [mx.rnn.LSTMCell(num_hidden=args.recurrent_state_size)]

    input_feature_shape = iter_train.provide_data[0][1]
    X = mx.symbol.Variable(iter_train.provide_data[0].name)
    Y = mx.sym.Variable(iter_train.provide_label[0].name)

    # reshape data before applying convolutional layer (takes 4D shape incase you ever work with images)
    rnn_input = mx.sym.reshape(data=X, shape=(0, q, -1))

    ###############
    # RNN Component
    ###############
    stacked_rnn_cells = mx.rnn.SequentialRNNCell()
    for i, recurrent_cell in enumerate(rcells):
        stacked_rnn_cells.add(recurrent_cell)
        stacked_rnn_cells.add(mx.rnn.DropoutCell(dropout))
    outputs, states = stacked_rnn_cells.unroll(length=q, inputs=rnn_input, merge_outputs=False)
    rnn_features = outputs[-1]  # only take value from final unrolled cell for use later

    input_feature_shape = iter_train.provide_data[0][1]
    X = mx.symbol.Variable(iter_train.provide_data[0].name)
    Y = mx.sym.Variable(iter_train.provide_label[0].name)

    # reshape data before applying convolutional layer (takes 4D shape incase you ever work with images)
    conv_input = mx.sym.reshape(data=X, shape=(0, 1, q, -1))

    ###############
    # CNN Component
    ###############
    outputs = []
    for i, filter_size in enumerate(filter_list):
        # pad input array to ensure number output rows = number input rows after applying kernel
        padi = mx.sym.pad(data=conv_input, mode="constant", constant_value=0,
                          pad_width=(0, 0, 0, 0, filter_size - 1, 0, 0, 0))
        convi = mx.sym.Convolution(data=padi, kernel=(filter_size, input_feature_shape[2]), num_filter=num_filter)
        acti = mx.sym.Activation(data=convi, act_type='relu')
        trans = mx.sym.reshape(mx.sym.transpose(data=acti, axes=(0, 2, 1, 3)), shape=(0, 0, 0))
        outputs.append(trans)
    cnn_features = mx.sym.Concat(*outputs, dim=2)
    cnn_reg_features = mx.sym.Dropout(cnn_features, p=dropout)
    c_features = mx.sym.reshape(data=cnn_reg_features, shape=(-1))
    print(type(c_features))

    ######################
    # Prediction Component
    ######################
    print(rnn_features.infer_shape())
    neural_components = mx.sym.concat(*[rnn_features, c_features], dim=1)
    neural_output = mx.sym.FullyConnected(data=neural_components, num_hidden=input_feature_shape[2])
    model_output = neural_output
    loss_grad = mx.sym.LinearRegressionOutput(data=model_output, label=Y)
    return loss_grad, [v.name for v in iter_train.provide_data], [v.name for v in iter_train.provide_label]
and I believe the crash is happening on this line of code
neural_components = mx.sym.concat(*[rnn_features, c_features], dim=1)
Here is what I have tried in an effort to get my dimensions to match up:
c_features = mx.sym.reshape(data = cnn_reg_features, shape = (-1))
c_features = cnn_reg_features[-1]
c_features = cnn_reg_features[:, -1, :]
I also tried looking at the GitHub issues and Googling around, but all I see is advice to use infer_shape. I tried applying this to c_features, but the output was not clear to me:
data: ()
gru_i2h_weight: ()
gru_i2h_bias: ()
Basically, I would like to know the shape of each symbol at every stage as this graph is built. I am used to this capability in TensorFlow, which makes it easier to build and debug graphs when one has gone astray with an incorrect reshape, or simply to get a sense of how a model works by looking at its dimensions. Is there no equivalent in mxnet?
Given that the data_iter is fed in when producing these symbols, I would think the inferred shapes should be available. Ultimately my questions are: (1) how can I see the shape of a symbol when it uses the data in the iterator and should know all shapes? (2) Are there general guidelines for debugging in this sort of situation?
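From reading the docs, I would expect that passing the iterator's input shape explicitly to infer_shape is the way to see intermediate shapes; this is a sketch of what I have in mind (the (128, 96, 20) shape is taken from the error message above, so treat it as an assumption):
# Sketch: ask MXNet to infer shapes through the graph, given the data shape from the iterator.
arg_shapes, out_shapes, aux_shapes = c_features.infer_shape(data=(128, 96, 20))
print(out_shapes)   # shape of c_features itself
arg_shapes, out_shapes, aux_shapes = rnn_features.infer_shape(data=(128, 96, 20))
print(out_shapes)   # shape of the last unrolled RNN output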
Thank you.

Related

Looking for a clean way to add additional channels of zeros to an NxNxC tensor with an unknown batch size?

So currently I have this code, which will pad a BxNxNxC tensor to a BxNxNx(C+P) tensor, where B is batch size, C is the number of channels, and P is the number of padding channels I want to add:
A = <some BxNxNxC tensor>
P = <some calculation>
padding_tensor = keras.layers.UpSampling3D(size=[1, 1, P])(tf.zeros_like(A)[:, :, :, 0:1])
# This is the BxNxNx(C+P) tensor
concat = keras.layers.Concatenate(axis=3)([A, padding_tensor])
The reason I do this in a roundabout way is that I cannot directly create a padding_tensor of the correct size: it seems impossible to get the batch size needed to specify the shape.
I want a clean way to do this because I am looking at the computation graphs of my models and this adds a lot of bloat. If it were possible to hide all of these operations in a single computation node I would be happy enough with that, but I would rather not use three operations for something as simple as padding.
I also suspect this will be somewhat slow, but I don't know enough about TensorFlow to tell.
This is my suggestion: initialize a fake Conv2D layer with zeros and make it not trainable; it will produce zero output.
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, Concatenate
from tensorflow.keras.models import Model

batch, H, W, F, C, P = 32, 28, 28, 3, 5, 6
X = np.random.uniform(0, 1, (batch, H, W, F))

inp = Input((H, W, F))
x_c = Conv2D(C, 3, padding='same')(inp)  # BxNxNxC
x_p = Conv2D(P, 3, padding='same', kernel_initializer='zeros', name='zeros')(inp)  # BxNxNxP
concat = Concatenate()([x_c, x_p])  # BxNxNx(C+P)

model = Model(inp, concat)
model.get_layer('zeros').trainable = False  # important
model.summary()

# check if zeros
model.predict(X)[:, :, :, -P:].sum()  # 0
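An alternative that keeps the graph smaller is to pad the channel axis directly inside a single Lambda layer. This is just a sketch under the same assumptions as above (channels-last layout, P extra zero channels); the batch size stays dynamic because tf.pad never needs it:
import tensorflow as tf
from tensorflow.keras.layers import Lambda

# Pad only the last (channel) axis with P zero channels.
pad_channels = Lambda(lambda t: tf.pad(t, [[0, 0], [0, 0], [0, 0], [0, P]]))
x_padded = pad_channels(x_c)  # BxNxNx(C+P)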

How does tf.dataset interact with keras.conv1D?

I'm using tf 1.15 and I'm trying to do a regression task using a signal.
First of all, I load my signals into the pipeline. I have several files; here I simulate the loading using np.zeros to make the code usable by you.
Every file has the shape (?, 75000, 3), where ? is a random number of elements, 75000 is the number of samples in each element, and 3 is the number of signals.
Using tf.data I unpack them and get a dataset that outputs signals with shape (75000,), and I use them in my Keras model.
Everything should be fine until I create the Keras model. I copied my input pipeline because during my tests I got different errors using a generic tf.data.Dataset versus the dataset built in this way.
import numpy as np
import tensorflow as tf

# called in the dataset pipeline
def my_func(x):
    p = np.zeros([86, 75000, 3])
    x = p[:, :, 0]
    y = p[:, :, 1]
    z = p[:, :, 2]
    return x, y, z

# called in the dataset pipeline
def load_sign(path):
    func = tf.compat.v1.numpy_function(my_func, [path], [tf.float64, tf.float64, tf.float64])
    return func

# Dataset pipeline
s = [1, 2]  # here I have the file paths, I simulate them with numbers
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds = tf.data.Dataset.from_tensor_slices(s)
# ds = ds.map(load_sign, num_parallel_calls=AUTOTUNE)
ds = ds.map(load_sign, num_parallel_calls=AUTOTUNE).unbatch()
itera = tf.data.make_one_shot_iterator(ds)
ABP, ECG, PLETH = itera.get_next()
# Until here everything should be fine

# Here I create my convolutional network
signal = tf.keras.layers.Input(shape=(None, 75000), dtype='float32')
x = tf.compat.v1.keras.layers.Conv1D(64, (1), strides=1, padding='same')(signal)
x = tf.keras.layers.Dense(75000)(x)
model = tf.keras.Model(inputs=signal, outputs=x, name='resnet18')

# And finally I try to insert my signal into the model
logits = model(PLETH)
I get this error:
ValueError: Input 0 of layer conv1d is incompatible with the layer: its rank is undefined, but the layer requires a defined rank.
Why? And how can I make it work?
Also, the input size of my net should be this, according to the documentation:
3D tensor with shape: (batch_size, steps, input_dim)
What is steps? In my case I assume the input should be (batch_size, 1, 75000), right?
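For what it's worth, my current guess (just an assumption on my part) is that the tensor coming out of numpy_function has no static shape, so Keras cannot determine its rank. This is the kind of workaround I have been trying, giving the tensor an explicit (batch_size, steps, input_dim) shape before calling the model:
# Attempt: give PLETH a defined rank/shape before feeding it to the model.
# A constant shape in tf.reshape yields a static (1, 1, 75000) shape.
PLETH = tf.reshape(PLETH, [1, 1, 75000])
logits = model(PLETH)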

Data Structure Discrepancy in Tensorflow/TFLearn

I have two datasets, which look like this:
input:
array([[[ 0.99309823],
...
[ 0. ]]])
shape : (1, 2501)
output:
array([[0, 0, 0, ..., 0, 0, 1],
...,
[0, 0, 0, ..., 0, 0, 0]])
shape : (2501, 9)
And I processed it with TFLearn as follows:
input_layer = tflearn.input_data(shape=[None,2501])
hidden1 = tflearn.fully_connected(input_layer,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout1 = tflearn.dropout(hidden1,0.8)
hidden2 = tflearn.fully_connected(dropout1,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout2 = tflearn.dropout(hidden2,0.8)
softmax = tflearn.fully_connected(dropout2,9,activation='softmax')
# Regression with SGD
sgd = tflearn.SGD(learning_rate=0.1,lr_decay=0.96, decay_step=1000)
top_k=tflearn.metrics.Top_k(3)
net = tflearn.regression(softmax,optimizer=sgd,metric=top_k,loss='categorical_crossentropy')
model = tflearn.DNN(net)
model.fit(input,output,n_epoch=10,show_metric=True, run_id='dense_model')
It works, but not the way that I want. It's a DNN model. I want the model to give me the corresponding prediction when I enter a value, for example [0,0,0,0,0,0,0,0,1] when I enter 0.95. However, when I try to enter 0.95, it says:
ValueError: Cannot feed value of shape (1,) for Tensor 'InputData/X:0', which has shape '(?, 2501)'
When I tried to understand this, I realised that my (wrongly structured) model needs (1, 2501)-shaped data for prediction.
What I want is, for every element in the input, to predict the corresponding element in the output. As you can see in the example dataset,
for [0.99309823] the corresponding output is [0,0,0,0,0,0,0,0,1]. I want tflearn to train itself like this.
I may have wrongly structured data or a wrong model (probably the dataset). I have explained everything; I need help, I'm really at a loss.
Your input data should be Nx1-dimensional (N = number of samples) to achieve this transformation ([0.99309823] --> [0,0,0,0,0,0,0,0,1]). According to your input data shape, it looks more like 1 sample with 2501 dimensions.
ValueError: Cannot feed value of shape (1,) for Tensor 'InputData/X:0', which has shape '(?, 2501)' This error means that TensorFlow expects you to provide a vector with shape (?, 2501), but you are feeding the network a vector with shape (1,).
Example modified code with dummy data:
import numpy as np
import tflearn
#creating dummy data
input_data = np.random.rand(1, 2501)
input_data = np.transpose(input_data) # now shape is (2501,1)
output_data = np.random.randint(8, size=2501)
n_values = 9
output_data = np.eye(n_values)[output_data]
# checking the shapes
print input_data.shape #(2501,1)
print output_data.shape #(2501,9)
input_layer = tflearn.input_data(shape=[None,1]) # now network is expecting ( Nx1 )
hidden1 = tflearn.fully_connected(input_layer,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout1 = tflearn.dropout(hidden1,0.8)
hidden2 = tflearn.fully_connected(dropout1,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout2 = tflearn.dropout(hidden2,0.8)
softmax = tflearn.fully_connected(dropout2,9,activation='softmax')
# Regression with SGD
sgd = tflearn.SGD(learning_rate=0.1,lr_decay=0.96, decay_step=1000)
top_k=tflearn.metrics.Top_k(3)
net = tflearn.regression(softmax,optimizer=sgd,metric=top_k,loss='categorical_crossentropy')
model = tflearn.DNN(net)
model.fit(input_data, output_data, n_epoch=10,show_metric=True, run_id='dense_model')
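With the input reshaped to Nx1, predicting for a single value then only needs a (1, 1)-shaped array; a quick sketch using the model defined above:
# Predict for a single scalar input; shape (1, 1) matches the (None, 1) input layer.
single_pred = model.predict(np.array([[0.95]]))
print(single_pred)  # one softmax distribution over the 9 classes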
Also, my friend warned me about the same thing as rcmalli. He says:
reshape:
input = tf.reshape(input, (2501,1))
change
input_layer = tflearn.input_data(shape=[None,2501])
to
input_layer = tflearn.input_data(shape=[None, 1])
The variable dimension must be "None". In your incorrect case, 2501 is the size (or something like that, I translated it from another language, but you get the idea) of your dataset. 1 is the constant input size.

How to feed back RNN output to input in tensorflow

Suppose I have a trained RNN (e.g. a language model) and I want to see what it would generate on its own; how should I feed its output back to its input?
I read the following related questions:
TensorFlow using LSTMs for generating text
TensorFlow LSTM Generative Model
Theoretically it is clear to me that in TensorFlow we use truncated backpropagation, so we have to define the maximum number of steps we would like to "trace". We also reserve a dimension for batches; therefore, if I'd like to train on a sine wave, I have to feed [None, num_steps, 1] inputs.
The following code works:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

tf.reset_default_graph()
n_samples=100
state_size=5
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(state_size, forget_bias=1.)
def_x = np.sin(np.linspace(0, 10, n_samples))[None, :, None]
zero_x = np.zeros(n_samples)[None, :, None]
X = tf.placeholder_with_default(zero_x, [None, n_samples, 1])
output, last_states = tf.nn.dynamic_rnn(inputs=X, cell=lstm_cell, dtype=tf.float64)
pred = tf.contrib.layers.fully_connected(output, 1, activation_fn=tf.tanh)
Y = np.roll(def_x, 1)
loss = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
opt = tf.train.AdamOptimizer().minimize(loss)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# Initial state run
plt.show(plt.plot(output.eval()[0]))
plt.plot(def_x.squeeze())
plt.show(plt.plot(pred.eval().squeeze()))
steps = 1001
for i in range(steps):
    p, l, _ = sess.run([pred, loss, opt])
The state size of the LSTM can be varied; I also experimented with feeding the sine wave and zeros into the network, and in both cases it converged in ~500 iterations. So far I have understood that in this case the graph consists of n_samples LSTM cells sharing their parameters, and it is only up to me to feed input to them as a time series. However, when generating samples the network explicitly depends on its previous output, meaning that I cannot feed the unrolled model all at once. I tried to compute the state and output at every step:
with tf.variable_scope('sine', reuse=True):
    X_test = tf.placeholder(tf.float64)
    X_reshaped = tf.reshape(X_test, [1, -1, 1])
    output, last_states = tf.nn.dynamic_rnn(lstm_cell, X_reshaped, dtype=tf.float64)
    pred = tf.contrib.layers.fully_connected(output, 1, activation_fn=tf.tanh)

test_vals = [0.]
for i in range(1000):
    val = pred.eval({X_test: np.array(test_vals)[None, :, None]})
    test_vals.append(val)
However in this model it seems that there is no continuity between the LSTM cells. What is going on here?
Do I have to initialize a zero array with, e.g., 100 time steps, and assign each run's result into the array? Like feeding the network with this:
run 0: input_feed = [0, 0, 0 ... 0]; res1 = result
run 1: input_feed = [res1, 0, 0 ... 0]; res2 = result
run 2: input_feed = [res1, res2, 0 ... 0]; res3 = result
etc...
What to do if I want to use this trained network to use its own output as its input in the following time step?
If I understood you correctly, you want to find a way to feed the output of time step t as input to time step t+1, right? To do so, there is a relatively easy workaround that you can use at test time:
Make sure your input placeholders can accept a dynamic sequence length, i.e. the size of the time dimension is None.
Make sure you are using tf.nn.dynamic_rnn (which you do in the posted example).
Pass the initial state into dynamic_rnn.
Then, at test time, you can loop through your sequence and feed each time step individually (i.e. max sequence length is 1). Additionally, you just have to carry over the internal state of the RNN. See pseudo code below (the variable names refer to your code snippet).
I.e., change the definition of the model to something like this:
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(state_size, forget_bias=1.)

def_x = np.sin(np.linspace(0, 10, n_samples))[None, :, None]
zero_x = np.zeros(n_samples)[None, :, None]

X = tf.placeholder_with_default(zero_x, [None, None, 1])  # [batch_size, seq_length, dimension of input]
batch_size = tf.shape(X)[0]
initial_state = lstm_cell.zero_state(batch_size, dtype=tf.float64)

output, last_states = tf.nn.dynamic_rnn(inputs=X, cell=lstm_cell, dtype=tf.float64,
                                        initial_state=initial_state)
pred = tf.contrib.layers.fully_connected(output, 1, activation_fn=tf.tanh)
Then you can perform inference like so:
fetches = {'final_state': last_states,
           'prediction': pred}

toy_initial_input = np.array([[[1]]])  # put suitable data here
seq_length = 20  # put whatever is reasonable here for you

# get the output for the first time step
feed_dict = {X: toy_initial_input}
eval_out = sess.run(fetches, feed_dict)
outputs = [eval_out['prediction']]
next_state = eval_out['final_state']

for i in range(1, seq_length):
    feed_dict = {X: outputs[-1],
                 initial_state: next_state}
    eval_out = sess.run(fetches, feed_dict)
    outputs.append(eval_out['prediction'])
    next_state = eval_out['final_state']

# outputs now contains the sequence you want
Note that this can also work for batches; however, it can be a bit more complicated if you have sequences of different lengths in the same batch.
If you want to perform this kind of prediction not only at test time but also at training time, that is possible too, but a bit more complicated to implement.
You can use its own output (last state) as the next-step input (initial state).
One way to do this is to:
use zero-initialized variables as the input state at every time step
each time you complete a truncated sequence and get some output state, update the state variables with the output state you just got.
The second step can be done either by:
fetching the states to Python and feeding them back next time, as done in the ptb example in tensorflow/models, or
building an update op in the graph and adding a dependency, as done in the ptb example in tensorpack (a rough sketch of this approach follows below).
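A rough sketch of the update-op variant, assuming a single-layer BasicLSTMCell, a fixed batch size of 1, and float64 throughout (matching the snippets above); it is meant as an illustration, not a drop-in replacement:
# Hold the LSTM state in non-trainable variables so it persists across session runs.
state_c = tf.Variable(tf.zeros([1, state_size], dtype=tf.float64), trainable=False)
state_h = tf.Variable(tf.zeros([1, state_size], dtype=tf.float64), trainable=False)
initial_state = tf.nn.rnn_cell.LSTMStateTuple(state_c, state_h)

output, last_states = tf.nn.dynamic_rnn(lstm_cell, X, initial_state=initial_state,
                                        dtype=tf.float64)

# Update op: write the final state back into the variables after each run.
update_state = tf.group(tf.assign(state_c, last_states.c),
                        tf.assign(state_h, last_states.h))

# Make the prediction depend on the state update so it runs every time pred is fetched.
with tf.control_dependencies([update_state]):
    pred = tf.identity(tf.contrib.layers.fully_connected(output, 1, activation_fn=tf.tanh))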
I know I'm a bit late to the party but I think this gist could be useful:
https://gist.github.com/CharlieCodex/f494b27698157ec9a802bc231d8dcf31
It lets you auto-feed the input through a filter and back into the network as input. To make the shapes match up, processing can be set as a tf.layers.Dense layer.
Please ask any questions!
Edit:
In your particular case, create a lambda which performs the processing of the dynamic_rnn outputs into your character vector space. Ex:
# if you have:
W = tf.Variable( ... )
B = tf.Variable( ... )
Yo, Ho = tf.nn.dynamic_rnn( cell , inputs , state )
logits = tf.matmul(W, Yo) + B
...
# use self_feeding_rnn as
process_yo = lambda Yo: tf.matmul(W, Yo) + B
Yo, Ho = self_feeding_rnn( cell, seed, initial_state, processing=process_yo)

How can I visualize the weights(variables) in cnn in Tensorflow?

After training the cnn model, I want to visualize the weight or print out the weights, what can I do?
I cannot even print out the variables after training.
Thank you!
To visualize the weights, you can use a tf.image_summary() op to transform a convolutional filter (or a slice of a filter) into a summary proto, write them to a log using a tf.train.SummaryWriter, and visualize the log using TensorBoard.
Let's say you have the following (simplified) program:
filter = tf.Variable(tf.truncated_normal([8, 8, 3]))
images = tf.placeholder(tf.float32, shape=[None, 28, 28])
conv = tf.nn.conv2d(images, filter, strides=[1, 1, 1, 1], padding="SAME")
# More ops...
loss = ...
optimizer = tf.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(loss)
filter_summary = tf.image_summary(filter)
sess = tf.Session()
summary_writer = tf.train.SummaryWriter('/tmp/logs', sess.graph_def)
for i in range(10000):
    sess.run(train_op)
    if i % 10 == 0:
        # Log a summary every 10 steps.
        summary_writer.add_summary(filter_summary, i)
After doing this, you can start TensorBoard to visualize the logs in /tmp/logs, and you will be able to see a visualization of the filter.
Note that this trick visualizes depth-3 filters as RGB images (to match the channels of the input image). If you have deeper filters, or they don't make sense to interpret as color channels, you can use the tf.split() op to split the filter on the depth dimension, and generate one image summary per depth.
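For example, the [8, 8, 3] filter above could be split along its depth and logged channel by channel. A rough sketch, assuming the era-appropriate tf.split(split_dim, num_split, value) signature used alongside tf.image_summary:
# Split the [8, 8, 3] filter into three [8, 8, 1] slices along the depth dimension
# and log each slice as its own single-image (grayscale) summary.
for i, f_slice in enumerate(tf.split(2, 3, filter)):
    tf.image_summary('filter/depth_%d' % i, tf.expand_dims(f_slice, 0), max_images=1)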
As @mrry said, you can use tf.image_summary. For example, for cifar10_train.py, you can put this code somewhere under def train(). Note how you access a variable under the scope 'conv1':
# Visualize conv1 features
with tf.variable_scope('conv1') as scope_conv:
    weights = tf.get_variable('weights')

    # scale weights to [0 255] and convert to uint8 (maybe change scaling?)
    x_min = tf.reduce_min(weights)
    x_max = tf.reduce_max(weights)
    weights_0_to_1 = (weights - x_min) / (x_max - x_min)
    weights_0_to_255_uint8 = tf.image.convert_image_dtype(weights_0_to_1, dtype=tf.uint8)

    # to tf.image_summary format [batch_size, height, width, channels]
    weights_transposed = tf.transpose(weights_0_to_255_uint8, [3, 0, 1, 2])

    # this will display random 3 filters from the 64 in conv1
    tf.image_summary('conv1/filters', weights_transposed, max_images=3)
If you want to visualize all your conv1 filters in one nice grid, you would have to organize them into a grid yourself. I did that today, so now I'd like to share a gist for visualizing conv1 as a grid
You can extract the values as numpy arrays the following way:
with tf.variable_scope('conv1', reuse=True) as scope_conv:
    W_conv1 = tf.get_variable('weights', shape=[5, 5, 1, 32])
    weights = W_conv1.eval()
    with open("conv1.weights.npz", "w") as outfile:
        np.save(outfile, weights)
Note that you have to adjust the scope ('conv1' in my case) and the variable name ('weights' in my case).
Then it boils down on visualizing numpy arrays. One example how to visualize numpy arrays is
#!/usr/bin/env python
"""Visualize numpy arrays."""
import numpy as np
import scipy.misc

arr = np.load('conv1.weights.npz')
# Get each 5x5 filter from the 5x5x1x32 array
for filter_ in range(arr.shape[3]):
    # Get the 5x5x1 filter:
    extracted_filter = arr[:, :, :, filter_]
    # Get rid of the last dimension (hence get 5x5):
    extracted_filter = np.squeeze(extracted_filter)
    # display the filter (might be very small - you can resize the window)
    scipy.misc.imshow(extracted_filter)
Using the TensorFlow 2 API, there are several options:
Weights extracted using the get_weights() function.
weights_n = model.layers[n].get_weights()[0]
Biases extracted using the numpy() conversion function.
bias_n = model.layers[n].bias.numpy()
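To actually look at the extracted filters, a small matplotlib sketch along these lines can help (assuming a Conv2D layer at index n; the normalization and the choice of input channel are purely illustrative):
import matplotlib.pyplot as plt

filters = model.layers[n].get_weights()[0]        # shape (kh, kw, in_channels, out_channels)
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)     # rescale to [0, 1] for display

n_show = min(6, filters.shape[-1])
for i in range(n_show):
    plt.subplot(1, n_show, i + 1)
    plt.imshow(filters[:, :, 0, i], cmap='gray')  # first input channel of filter i
    plt.axis('off')
plt.show()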
