Tensorflow: Feeding placeholder from variable - python

I'm feeding a TensorFlow computation (train) graph using an input queue and
the tf.train.batch function, which prepares a huge tensor of data.
I have another queue with test data that I would like to feed to the graph every 50th step.
Question
Given the form of the input (tensors), do I have to define a separate test graph for the test-data computation, or can I somehow reuse the train graph?
# Prepare data
batch = tf.train.batch([train_image, train_label], batch_size=200)
batchT = tf.train.batch([test_image, test_label], batch_size=200)
x = tf.reshape(batch[0], [-1, IMG_SIZE, IMG_SIZE, 3])
y_ = batch[1]
xT = tf.reshape(batchT[0], [-1, IMG_SIZE, IMG_SIZE, 3])
y_T = batchT[1]
# Graph definition
train_step = ... # train_step = g(x)
# Session
sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(1000):
    if i % 50 == 0:
        # Here I would like to reuse the train graph, but with tensor x replaced by xT:
        # train_accuracy = ?
        # print("step %d, training accuracy %g" % (i, train_accuracy))
        pass
    train_step.run(session=sess)
I would use placeholders, but I can't feed a tf.placeholder with tf.Tensors, and tensors are what I'm getting from the queues.
How is this supposed to be done?
I'm really just starting out.

Take a look at how this is done in the MNIST example: you need to use a placeholder with an initializer of the non-tensor form of your data (like filenames, or CSV), and then inside the graph, use slice_input_producer -> decode_jpeg (or whatever...) -> tf.train.batch() to create batches and feed those to the computation graph.
So your graph looks something like this (a sketch follows the list):
Placeholder initialized with big filenames list/CSV/range
tf.slice_input_producer
tf.image.decode_jpeg or tf.py_func - loading of the actual data
tf.train.batch - create mini batches for training
feed to your model
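A minimal sketch of that pipeline, assuming JPEG files whose paths sit in a filenames list with matching integer labels (both names are stand-ins for your own data):
import tensorflow as tf

# Hypothetical inputs: JPEG paths and matching integer labels (assumed).
filenames = tf.constant(["img0.jpg", "img1.jpg"])
labels = tf.constant([0, 1])

# Produce one (filename, label) pair at a time, shuffled each epoch.
file_q, label_q = tf.train.slice_input_producer([filenames, labels], shuffle=True)

# Load and decode the actual image data inside the graph.
image = tf.image.decode_jpeg(tf.read_file(file_q), channels=3)
image = tf.image.resize_images(image, [IMG_SIZE, IMG_SIZE])

# Assemble mini-batches to feed the model.
# (Remember to call tf.train.start_queue_runners(sess) before pulling batches.)
image_batch, label_batch = tf.train.batch([image, label_q], batch_size=200)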

Related

How to generate predictions from new data using trained tensorflow network?

I want to train Google's VGGish network (Hershey et al. 2017) from scratch to predict classes specific to my own audio files.
For this I am using the vggish_train_demo.py script available on their GitHub repo, which uses TensorFlow. I've been able to modify the script to extract mel-spectrogram features from my own audio by changing the _get_examples_batch() function, and then train the model on the output of this function. This runs to completion and prints the loss at each epoch.
However, I've been unable to figure out how to get this trained model to generate predictions from new data. Can this be done with changes to the vggish_train_demo.py script?
For anyone who stumbles across this in the future, I wrote this script, which does the job. You must save log-mel specs for train and test data in the arrays X_train, y_train, X_test, y_test. X_train/X_test are arrays of (n, 96, 64) features and y_train/y_test are arrays of shape (n, _NUM_CLASSES) for two classes, where n = the number of 0.96 s audio segments and _NUM_CLASSES = the number of classes used.
See the function definition statement for more info and the VGGish GitHub in my original post:
### Run the network and save the predictions and accuracy at each epoch
### Train NN, output results
r"""This uses the VGGish model definition within a larger model which adds two
layers on top, and then trains this larger model.

We input log-mel spectrograms (X_train) calculated above with associated labels
(y_train), and feed the batches into the model. Once the model is trained, it
is then executed on the test log-mel spectrograms (X_test), and the accuracy is
output, alongside a .csv file with the predictions for each 0.96s chunk and their
true class."""

import numpy as np
import pandas as pd
import tensorflow as tf
import vggish_params
import vggish_slim

slim = tf.contrib.slim

# FLAGS, _NUM_CLASSES, num_epochs, batch_size, col_names and the X/y arrays are
# assumed to be defined earlier in the script (see vggish_train_demo.py).

def main(X):
  with tf.Graph().as_default(), tf.Session() as sess:
    # Define VGGish.
    embeddings = vggish_slim.define_vggish_slim(training=FLAGS.train_vggish)

    # Define a shallow classification model and associated training ops on top
    # of VGGish.
    with tf.variable_scope('mymodel'):
      # Add a fully connected layer with 100 units. Add an activation function
      # to the embeddings since they are pre-activation.
      num_units = 100
      fc = slim.fully_connected(tf.nn.relu(embeddings), num_units)

      # Add a classifier layer at the end, consisting of parallel logistic
      # classifiers, one per class. This allows for multi-class tasks.
      # Note: the loss below applies the sigmoid itself, so `logits` must stay
      # pre-activation; `prediction` holds the per-class probabilities.
      logits = slim.fully_connected(
          fc, _NUM_CLASSES, activation_fn=None, scope='logits')
      prediction = tf.sigmoid(logits, name='prediction')

    # Add training ops.
    with tf.variable_scope('train'):
      global_step = tf.train.create_global_step()

      # Labels are assumed to be fed as a batch of multi-hot vectors, with
      # a 1 in the position of each positive class label, and 0 elsewhere.
      labels_input = tf.placeholder(
          tf.float32, shape=(None, _NUM_CLASSES), name='labels')

      # Cross-entropy label loss.
      xent = tf.nn.sigmoid_cross_entropy_with_logits(
          logits=logits, labels=labels_input, name='xent')
      loss = tf.reduce_mean(xent, name='loss_op')
      tf.summary.scalar('loss', loss)

      # We use the same optimizer and hyperparameters as used to train VGGish.
      optimizer = tf.train.AdamOptimizer(
          learning_rate=vggish_params.LEARNING_RATE,
          epsilon=vggish_params.ADAM_EPSILON)
      train_op = optimizer.minimize(loss, global_step=global_step)

    # Accuracy ops, defined once (outside the training loop) so the graph does
    # not grow each epoch. tf.argmax returns the index of the highest-scoring
    # class, so a prediction is correct when it matches the true label's index.
    correct = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_input, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, 'float'))

    # Initialize all variables in the model, and then load the pre-trained
    # VGGish checkpoint.
    sess.run(tf.global_variables_initializer())
    vggish_slim.load_vggish_slim_checkpoint(sess, FLAGS.checkpoint)

    # The training loop.
    features_input = sess.graph.get_tensor_by_name(
        vggish_params.INPUT_TENSOR_NAME)

    accuracy_scores = []
    for epoch in range(num_epochs):
      epoch_loss = 0
      i = 0
      while i < len(X_train):
        start = i
        end = i + batch_size
        batch_x = np.array(X_train[start:end])
        batch_y = np.array(y_train[start:end])
        _, c = sess.run([train_op, loss],
                        feed_dict={features_input: batch_x,
                                   labels_input: batch_y})
        epoch_loss += c
        i += batch_size

      # Print the epoch number and the loss.
      print('Epoch', epoch + 1, 'completed out of', num_epochs,
            ', loss:', epoch_loss)

      # If these lines are left here, the model is evaluated on the test data
      # every epoch and the accuracy printed; note this adds a small
      # computational cost.
      accuracy1 = accuracy.eval({features_input: X_test,
                                 labels_input: y_test})
      accuracy_scores.append(accuracy1)
      print('Accuracy:', accuracy1)

      # Save sigmoid probabilities (one column per class) for the test data.
      predictions_sigm = prediction.eval(feed_dict={features_input: X_test})
      test_preds = pd.DataFrame(predictions_sigm, columns=col_names)
      test_preds['True class'] = np.argmax(y_test, axis=1)

      # Save a .csv file of the predictions for the test data.
      # NB: the header is not saved when using np.savetxt.
      np.savetxt("/content/drive/MyDrive/..." + "Epoch_" + str(epoch + 1) +
                 "_Accuracy_" + str(accuracy1),
                 test_preds.values, delimiter=",")

if __name__ == '__main__':
  tf.app.run()
# An 'An exception has occurred, use %tb to see the full traceback.' message in
# a notebook just means the script has finished: tf.app.run() raises SystemExit.

Feed Iterator to Tensorflow Graph

I have a tf.data.Iterator created with make_one_shot_iterator() and want to use it to train my (existing) model.
Currently my training looks like this
input_node = tf.placeholder(tf.float32, shape=(None, height, width, channels))
net = models.ResNet50UpProj({'data': input_node}, batch_size, keep_prob=True,is_training=True)
labels = tf.placeholder(tf.float32, shape=(None, width, height, 1))
huberloss = tf.losses.huber_loss(predictions=net.get_output(),labels=labels)
And then calling
sess.run(train_op, feed_dict={labels:output_img, input_node:input_img})
After training I can get a prediction like that:
pred = sess.run(net.get_output(), feed_dict={input_node: img})
Now with an iterator I tried something like this
next_element = iterator.get_next()
Passing the input data like this:
net = models.ResNet50UpProj({'data': next_element[0]}, batch_size, keep_prob=True,is_training=True)
Defining the loss function like this:
huberloss = tf.losses.huber_loss(predictions=net.get_output(),labels=next_element[1])
And the training is executed simply by calling the following; the iterator advances automatically on every call:
sess.run(train_op)
My problem is: After training I can't make any prediction. Or rather I don't know the proper way of using the iterator in my case.
Solution 1: create a separate sub-graph just for inference. This matters especially when you have layers that behave differently at inference time, like batch normalization and dropout (is_training=False).
# The following code assumes that you create variables with `tf.get_variable`.
# If you create variables manually, you have to reuse them manually.
with tf.variable_scope('somename'):
    net = models.ResNet50UpProj({'data': next_element[0]}, batch_size, keep_prob=True, is_training=True)
with tf.variable_scope('somename', reuse=True):
    net_for_eval = models.ResNet50UpProj({'data': some_placeholder_or_inference_data_iterator}, batch_size, keep_prob=True, is_training=False)
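If some_placeholder_or_inference_data_iterator above is a tf.placeholder, prediction then looks just like it did before switching to the iterator; a minimal sketch (names as above):
# Run the inference sub-graph, feeding the placeholder directly.
pred = sess.run(net_for_eval.get_output(),
                feed_dict={some_placeholder_or_inference_data_iterator: img})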
Solution 2: use feed_dict. You can replace almost any tf.Tensor with a feed dict, not just tf.placeholder.
sess.run(huberloss, {next_element[0]: inference_image, next_element[1]: inference_labels})
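Applied to the prediction the question asks about, a minimal sketch of the same idea:
# Feed the iterator's output tensor directly, overriding the input pipeline.
pred = sess.run(net.get_output(), feed_dict={next_element[0]: img})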

Tensorflow error concerning the shape of placeholders

I'm very new to TensorFlow and I'm trying to understand the concept of placeholders.
Let's say I have a feature set of shape 100x4, that is, 100 rows of 4 different features. The target then has shape 100x1. If I want to use both matrices as a training set, what I do is:
X = tf.placeholder(tf.float64, shape=X_train.shape)
Y = tf.placeholder(tf.float64, shape=y_train.shape)
W = tf.Variable(tf.random_normal([4, 1]), name="weight", dtype=tf.float32)
b = tf.Variable(rng.randn(), name="bias", dtype=tf.float32)
pred = tf.add(tf.multiply(X, W), b)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    # Run the initializer
    sess.run(init)
    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(X_train, y_train):
            sess.run(optimizer, feed_dict={X: x, Y: y})
    ... # some plotting and printing of results
Which then results into a "ValueError: Cannot feed value of shape (...,) for Tensor 'Placeholder:0', which has shape '(..., ...)'". More specifically, the dimensions are not equal for 'sub' in cost function.
Could someone explain how to proceed and why?
Thanks in advance
You should use placeholders if you want to train your data in batches.
Why?
This is done when you have a large dataset; for example, if you want to train your classifier on an image classification problem but can't load all of your training images into memory, you instead train your model through batch gradient descent. Through this technique only a single batch of images is loaded each time and backpropagation is performed only on that batch. This requires more epochs to converge to a minimum, but each epoch is faster to train.
How?
You first define two placeholders one for the training examples X and one for their labels Y, with respective shapes (batch_size, 4) and (batch_size, 1) in your case.
Then when you want to train your model you should feed your data into the placeholders through a feed dictionary:
with tf.Session() as sess:
    sess.run(train_op, feed_dict={X: x_batch, Y: y_batch}) # train_op is the operation that minimizes your cost function
where x_batch and y_batch should be random batches from your X_train and Y_train arrays, but instead of 100 examples they should have batch_size examples (so that their dimensions match the placeholders' dimensions).
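A minimal sketch of drawing such a batch, assuming X_train and Y_train are NumPy arrays and batch_size is chosen by you:
import numpy as np

batch_size = 10  # assumed
idx = np.random.choice(len(X_train), size=batch_size, replace=False)
x_batch, y_batch = X_train[idx], Y_train[idx]  # shapes (batch_size, 4) and (batch_size, 1)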
Why you shouldn't do this in your case?
Since you have a small dataset that is already loaded in memory, you could use regular gradient descent.
How?
Just use variables (tf.Variable()) instead of placeholders.
X = tf.Variable(X_train)
Y = tf.Variable(Y_train)
This will create two Variable-type tensors which, when initialized, will take the shapes and values of X_train and Y_train respectively.
Just don't forget to initialize them in your session:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer()) # initialize variables
    sess.run(train_op) # no need for a feed_dict
understand the concept of Placeholders
Placeholders are needed to hold a place for real data that you will feed in future:
x = tf.placeholder(tf.float32, shape=X_train.shape)
logits = nn(x) # making some operations with x in order to calculate logits
s = tf.Session()
logits = s.run(logits, feed_dict={x: X_train})
Because we used a placeholder to build logits, we need to feed real data in place of the placeholder in order to compute logits.
"ValueError: Cannot feed value of shape (...,) for Tensor 'Placeholder:0', which has shape '(..., ...)'"
It looks like the placeholder x has rank 2 but the value being fed has rank 1 (the training loop feeds one row at a time rather than the whole X_train). Better to double-check your data.
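One common fix, shown here as a sketch, is to give the placeholders a flexible batch dimension and add a leading axis to each single example before feeding:
X = tf.placeholder(tf.float64, shape=(None, 4))
Y = tf.placeholder(tf.float64, shape=(None, 1))
# ...
for (x, y) in zip(X_train, y_train):
    # x has shape (4,) and y has shape (1,); [None, :] makes them rank 2.
    sess.run(optimizer, feed_dict={X: x[None, :], Y: y[None, :]})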

TF: how to create a dataset from user input data

I've recently started playing with TensorFlow and, more specifically, with the new Dataset API.
I've successfully used a dataset to feed training data to my simple model by plugging dataset's iterators to the nodes of my graph representing input and label. Something like:
input = input_dataset.make_one_shot_iterator().get_next()
label = label_dataset.make_one_shot_iterator().get_next()
Now I'm wondering what to do when I have to do inference on a user input, that is, the user gives me one single input value and I have to make my prediction. If I had a placeholder I would just put the user input in a feed_dict, but with the dataset api I have very little idea how to do something similar. Shall I have a separate graph only for inference in which my input variable is a placeholder?
I've already tried to make a feedable iterator as described here, but that only works with a placeholder for strings, while my inputs are int32.
Thanks for any advice.
For that specific purpose, TensorFlow provides the tf.placeholder_with_default API:
# Create a Dataset
dataset = tf.data.Dataset.zip((input_dataset, label_dataset)).batch(32).repeat(...)

# Create an Iterator and get its next element
input, label = dataset.make_one_shot_iterator().get_next()

# Create Placeholders that default to the iterator's output
x = tf.placeholder_with_default(input, shape=[...], name='input')
y = tf.placeholder_with_default(label, shape=[...], name='label')

def nn_model(features, labels):
    logits = ...
    loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
    return optimizer, loss, logits

# Create the Model
train_op, loss_op, logits_op = nn_model(x, y)

# Training: nothing is fed, so x and y pull the next batch from the dataset
sess.run(train_op)

# Inference: feeding x overrides the dataset input with the user's value
sess.run(logits_op, feed_dict={x: ...})
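The design point: when nothing is fed, a tf.placeholder_with_default tensor evaluates to its default (here, the next batch from the dataset), but a feed_dict entry overrides it, so the same graph serves both training and one-off user inputs, and it works for int32 inputs just as well as for strings.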

Loading model weights into a new tensorflow graph

Goal
Using TensorFlow, I'm trying to train an LSTM model for a certain number of iterations on data that has N timesteps per sample, then slowly increase the number of timesteps per sample as the model trains.
So maybe the RNN model is looking at 4 timesteps per training sample at first. After training for a while, performance levels out. I'd like to now continue training the model with 8 timesteps. This is basically a form of finetuning for RNNs.
Progress
The seemingly most straightforward way to do this would be to save the model after training it for a while, then rebuild a new graph with a new Variable X with more timesteps defined.
Unfortunately, I can't find a way to avoid hardcoding the number of timesteps into my model. But that's OK, because if I recreate the model and fill it with saved weights, the variable shapes should be the same, so it should work.
So I'm running the model a first time to generate a save file. Then I'm loading that save file and trying to populate a new graph with the weights from the old (almost identical) tensorflow graph.
This has been driving me crazy, so any help is much appreciated.
Code
Here's my code so far:
if MODEL_FILE is not None:
    # load from saved model file
    new_saver = tf.train.import_meta_graph(MODEL_FILE + '.meta')

weights = {
    'out': tf.Variable(tf.random_uniform([LSTM_SIZE, n_outputs_sm]))
}
biases = {
    'out': tf.Variable(tf.random_uniform([n_outputs_sm]))
}

# setup input X and output Y graph variables
x = tf.placeholder('float', [None, NUM_TIMESTEPS, n_input], name='input_x')
y = tf.placeholder('float', [None, n_outputs_sm], name='output_y')

# Feed-forward function to get the RNN output. We're using a fancy type of LSTM cell.
def TFEncoderRNN(inp, weights, biases):
    # current input shape: (batch_size, n_steps, n_input)
    # required shape: 'n_steps' tensors list of shape (batch_size, n_input)
    inp = tf.unstack(inp, NUM_TIMESTEPS, 1)
    lstm_cell = tf.contrib.rnn.LayerNormBasicLSTMCell(LSTM_SIZE, dropout_keep_prob=DROPOUT)
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, inp, dtype=tf.float32)
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

# we'll be able to call this to get our model output
pred = TFEncoderRNN(x, weights, biases)

# define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

# I define some more stuff here I'll leave out for brevity

init = None
if new_saver:
    new_saver.restore(sess, './' + MODEL_FILE)
    init = tf.initialize_variables([global_step])
else:
    init = tf.global_variables_initializer()
sess.run(init)

######
### TRAIN AND STUFF
######

print("Optimization finished!")

# save the current graph, you can just run this script again to
# continue training
if SAVE_MODEL:
    print("Saving model")
    saver = tf.train.Saver()
    saver.save(sess, 'tf_model_001')
Any ideas on how to move my trained model weights into a newly created graph/model?
The seemingly most straightforward way to do this would be to save the model after training it for a while, then rebuild a new graph with a new Variable X with more timesteps defined.
Actually, this is what tf.nn.dynamic_rnn is for -- the same model works for any sequence length.
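A minimal sketch of that approach, reusing the question's names (LSTM_SIZE, DROPOUT, n_input, weights, biases) and a batch-major input layout:
# The time dimension is None, so the same graph accepts 4-step and 8-step batches.
x = tf.placeholder(tf.float32, [None, None, n_input], name='input_x')
lstm_cell = tf.contrib.rnn.LayerNormBasicLSTMCell(LSTM_SIZE, dropout_keep_prob=DROPOUT)
outputs, states = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)
# Classify from the output at the last timestep.
pred = tf.matmul(outputs[:, -1, :], weights['out']) + biases['out']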
