Tensorflow: execute computations and avoid memory exceeded - python

I'm trying to run a TensorFlow computation on data of size (624003, 17424), obtained from text with CountVectorizer.
I always get the error tensorflow.python.framework.errors_impl.InternalError: Dst tensor is not initialized.
With a sample of the data, e.g. (213556, 11605), it works fine, but after increasing the dataset size it fails.
This is the TensorFlow code I use:
batch_size = 1024
X = tf.placeholder(tf.float32, shape=(None, X_train.shape[1]), name="X")
y = tf.placeholder(tf.float32, shape=(None, y_train.shape[1]), name="y")
# set model weights
weights = tf.Variable(tf.random_normal([X_train.shape[1], y_train.shape[1]], stddev=0.5), name="weights")
# construct model
y_pred = tf.nn.sigmoid(tf.matmul(X, weights))
# minimize error using cross entropy
# cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(y_pred), reduction_indices=1))
cost = tf.reduce_mean(-(y*tf.log(y_pred) + (1 - y)*tf.log(1 - y_pred)))
optimizer_01 = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
optimizer_001 = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
# saving model weights
saver = tf.train.Saver({"weights": weights})
# variables initializing
init = tf.global_variables_initializer()
# starting session
with tf.Session(config=tf.ConfigProto(device_count={'GPU': 0})) as sess:
    sess.run(init)
In the main block I train on the training data and compute accuracy on the test data.
How can I train on all of the training data and avoid running out of memory?
For batching I use the following function:
def optimize(session, optimizer, X_train, X_test, y_train, y_test, epoch=1):
    for epoch in range(epoch):
        for batch_i, (start, end) in enumerate(split(0, X_train.shape[0], batch_size)):
            x_batch, y_true_batch = X_train[start:end].toarray(), y_train[start:end]
            feed_dict_train = {X: x_batch, y: y_true_batch}
            session.run(optimizer, feed_dict=feed_dict_train)
        feed_dict_test = {X: X_test.toarray(), y: y_test}
        cost_step_test = session.run(cost, feed_dict=feed_dict_test)

A (624003, 17424) float32 tensor is about 40 GB, so you shouldn't allocate such a big tensor.
You need to give up full-batch processing and switch to mini-batch processing.
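For the test-set cost in particular, X_test.toarray() materializes the whole dense matrix at once. A minimal sketch (reusing cost, X, y and batch_size from the question, with a plain range loop of my own instead of the split helper) accumulates the cost over slices instead:

def batched_cost(session, X_test, y_test, batch_size=1024):
    # Densify the sparse matrix one slice at a time and average the cost,
    # weighting each batch by its number of rows.
    total, n = 0.0, X_test.shape[0]
    for start in range(0, n, batch_size):
        end = min(start + batch_size, n)
        feed = {X: X_test[start:end].toarray(), y: y_test[start:end]}
        total += session.run(cost, feed_dict=feed) * (end - start)
    return total / n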

Related

Is there a way to configure the output shape of an RNN?

I'm trying to create an RNN to guess what notes are being played on a piano, given a sound file of piano notes (WAV format). I'm currently cutting the WAV clips into ten-second chunks (2D), padding shorter sections to 10 seconds with zeroes so the input is all regular. However, when I pass the clips to the RNN, it gives an output of one less dimension (1D) (when taking the last state; should I be taking the state series?).
I've created a simpler RNN to analyze single-note files (2D) and produce one output (1D), which has been successful. However, when trying to apply this same technique to full clips with multiple notes and notes starting/stopping, it seems to break down, as I can't seem to change the output shape.
def weight_variable(shape):
    initer = tf.truncated_normal_initializer(stddev=0.01)
    return tf.get_variable('W', dtype=tf.float32, shape=shape, initializer=initer)

def bias_variable(shape):
    initial = tf.constant(0., shape=shape, dtype=tf.float32)
    return tf.get_variable('b', dtype=tf.float32, initializer=initial)

def RNN(x, weights, biases, timesteps, num_hidden):
    x = tf.unstack(x, timesteps, 1)
    # Define an RNN cell with TensorFlow
    lstm_cell = rnn.LSTMCell(num_hidden)
    states_series, current_state = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
    return tf.matmul(current_state[1], weights) + biases
    # return [tf.matmul(temp, weights) + biases for temp in states_series]
    # does this even make sense
# x is for data, y is for targets, shapes are [index, time, frequency], [index, time, output note (s)] respectively
x_train, x_valid, y_train, y_valid = load_data() # removed test
print("Size of:")
print("- Training-set:\t\t{}".format(y_train.shape[0]))
print("- Validation-set:\t{}".format(y_valid.shape[0]))
# print("- Test-set\t{}".format(len(y_test)))
learning_rate = 0.001 # The optimization initial learning rate
epochs = 1000 # Total number of training epochs
batch_size = 100 # Training batch size
display_freq = 100 # Frequency of displaying the training results
threshold = 0.7 # Threshold for determining a "note"
num_hidden_units = 15 # Number of hidden units of the RNN
# Placeholders for inputs (x) and outputs(y)
x = tf.placeholder(tf.float32, shape=(None, stepCount, num_input))
y = tf.placeholder(tf.float32, shape=(None, stepCount, n_classes))
# create weight matrix initialized randomly from N~(0, 0.01)
W = weight_variable(shape=[num_hidden_units, n_classes])
# create bias vector initialized as zero
b = bias_variable(shape=[n_classes])
output_logits = RNN(x, W, b, stepCount, num_hidden_units)
y_pred = tf.nn.softmax(output_logits)
# Define the loss function, optimizer, and accuracy, etc.
# (code removed, irrelevant)
# Creating the op for initializing all variables
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init)
global_step = 0
# Number of training iterations in each epoch
num_tr_iter = int(y_train.shape[0] / batch_size)
for epoch in range(epochs):
    print('Training epoch: {}'.format(epoch + 1))
    x_train, y_train = randomize(x_train, y_train)
    for iteration in range(num_tr_iter):
        global_step += 1
        start = iteration * batch_size
        end = (iteration + 1) * batch_size
        x_batch, y_batch = get_next_batch(x_train, y_train, start, end)
        # Run optimization op (backprop)
        feed_dict_batch = {x: x_batch, y: y_batch}
        sess.run(optimizer, feed_dict=feed_dict_batch)
        if iteration % display_freq == 0:
            # Calculate and display the batch loss and accuracy
            loss_batch, acc_batch = sess.run([loss, accuracy], feed_dict=feed_dict_batch)
            print("iter {0:3d}:\t Loss={1:.2f},\tTraining Accuracy={2:.01%}".
                  format(iteration, loss_batch, acc_batch))
            testLoss.append(loss_batch)
            testAcc.append(acc_batch)
    # Run validation after every epoch
    feed_dict_valid = {x: x_valid[:1000].reshape((-1, stepCount, num_input)), y: y_valid[:1000]}
    loss_valid, acc_valid = sess.run([loss, accuracy], feed_dict=feed_dict_valid)
    print('---------------------------------------------------------')
    print("Epoch: {0}, validation loss: {1:.2f}, validation accuracy: {2:.01%}".
          format(epoch + 1, loss_valid, acc_valid))
    print('---------------------------------------------------------')
    validLoss.append(loss_valid)
    validAcc.append(acc_batch)
Currently, this is outputting a 1D array of predictions, which really does not make sense in my scenario, but I'm not sure how to change it (it should be outputting predictions for each timestep - i.e. predictions of what notes are playing at each moment in time).
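One direction, sketched from the commented-out line in the RNN function above (this is my own untested suggestion, not an answer from the thread): apply the output weights to every timestep of states_series and stack the results, giving logits of shape [batch, timesteps, n_classes], which matches the y placeholder.

def RNN(x, weights, biases, timesteps, num_hidden):
    x = tf.unstack(x, timesteps, 1)
    lstm_cell = rnn.LSTMCell(num_hidden)
    states_series, current_state = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
    # Project every timestep's output, then stack back into [batch, timesteps, n_classes]
    per_step_logits = [tf.matmul(step_output, weights) + biases for step_output in states_series]
    return tf.stack(per_step_logits, axis=1)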

Get a prediction from a TensorFlow model

I want to get predictions from my trained TensorFlow model. The following is the code I have for training it.
def train_model(self, train, test, learning_rate=0.0001, num_epochs=16, minibatch_size=32, print_cost=True, graph_filename='costs'):
    # Ensure that model can be rerun without overwriting tf variables
    ops.reset_default_graph()
    # For reproducibility
    tf.set_random_seed(42)
    seed = 42
    # Get input and output shapes
    (n_x, m) = train.images.T.shape
    n_y = train.labels.T.shape[0]
    costs = []
    # Create placeholders of shape (n_x, n_y)
    X, Y = self.create_placeholders(n_x, n_y)
    # Initialize parameters
    parameters = self.initialize_parameters()
    # Forward propagation
    Z3 = self.forward_propagation(X, parameters)
    # Cost function
    cost = self.compute_cost(Z3, Y)
    # Backpropagation (using Adam optimizer)
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
    # Initialize variables
    init = tf.global_variables_initializer()
    # Start session to compute Tensorflow graph
    with tf.Session() as sess:
        # Run initialization
        sess.run(init)
        # Training loop
        for epoch in range(num_epochs):
            epoch_cost = 0.
            num_minibatches = int(m / minibatch_size)
            seed = seed + 1
            for i in range(num_minibatches):
                # Get next batch of training data and labels
                minibatch_X, minibatch_Y = train.next_batch(minibatch_size)
                # Execute optimizer and cost function
                _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X.T, Y: minibatch_Y.T})
                # Update epoch cost
                epoch_cost += minibatch_cost / num_minibatches
            # Print the cost every epoch
            if print_cost == True:
                print("Cost after epoch {epoch_num}: {cost}".format(epoch_num=epoch, cost=epoch_cost))
                costs.append(epoch_cost)
        # Plot costs
        plt.figure(figsize=(16, 5))
        plt.plot(np.squeeze(costs), color='#2A688B')
        plt.xlim(0, num_epochs - 1)
        plt.ylabel("cost")
        plt.xlabel("iterations")
        plt.title("learning rate = {rate}".format(rate=learning_rate))
        plt.savefig(graph_filename, dpi=300)
        plt.show()
        # Save parameters
        parameters = sess.run(parameters)
        print("Parameters have been trained!")
        # Calculate correct predictions
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
        # Calculate accuracy on test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Train Accuracy:", accuracy.eval({X: train.images.T, Y: train.labels.T}))
        print("Test Accuracy:", accuracy.eval({X: test.images.T, Y: test.labels.T}))
        return parameters
After training the model, I want to extract predictions from it.
So I add
print(sess.run(accuracy, feed_dict={X: test.images.T}))
But I am seeing the below error after running the above code:
InvalidArgumentError: You must feed a value for placeholder tensor 'Y'
with dtype float and shape [10,?]
[[{{node Y}} = Placeholder[dtype=DT_FLOAT, shape=[10,?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
Any help is welcome.
The tensor accuracy is a function of the tensor correct_prediction, which in turn is a function of (among other things) Y.
So you're correctly being told that you must feed a value for that placeholder too.
I'm assuming Y holds your labels, so it should also make intuitive sense that your feed_dict should contain the correct Y values.
Hope that helps.
Good luck!
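As a minimal sketch of both options (reusing X, Y, Z3, accuracy, and sess from the question; prediction_op is my own hypothetical name):

# Option 1: accuracy depends on the labels, so feed both placeholders.
print(sess.run(accuracy, feed_dict={X: test.images.T, Y: test.labels.T}))

# Option 2: raw predictions depend only on X, so no labels are needed.
prediction_op = tf.argmax(Z3)  # hypothetical name; predicted class per example
print(sess.run(prediction_op, feed_dict={X: test.images.T}))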

How can I, with TFLearn, interact with the training metrics (current validation accuracy, training accuracy, etc.)?

This is an example from the TFLearn documentation. It shows how to combine TFLearn and TensorFlow, using a TFLearn trainer with a regular TensorFlow graph. However, the current training, test and validation accuracy calculations are not accessible.
import tensorflow as tf
import tflearn
...
# User defined placeholders
with tf.Graph().as_default():
    # Placeholders for data and labels
    X = tf.placeholder(shape=(None, 784), dtype=tf.float32)
    Y = tf.placeholder(shape=(None, 10), dtype=tf.float32)
    net = tf.reshape(X, [-1, 28, 28, 1])
    # Using TFLearn wrappers for network building
    net = tflearn.conv_2d(net, 32, 3, activation='relu')
    ...
    net = tflearn.fully_connected(net, 10, activation='linear')
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(
            logits=net,
            labels=Y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
    # Initializing the variables
    ...
    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
        ...
        for epoch in range(2):  # 2 epochs
            ...
            for i in range(total_batch):
                batch_xs, batch_ys = mnist_data.train.next_batch(batch_size)
                sess.run(optimizer, feed_dict={X: batch_xs, Y: batch_ys})
How do I access the calculated training and validation accuracy at each step in the nested FOR loop?
UPDATE FOR CLARITY:
A solution might be as follows: Using the fit_batch method of the Trainer class, I believe I am calculating the training and validation accuracy during the nested loop.
Does this code calculate the running accuracies as the model trains?
Is there a better way of doing this with TFLearn?
I understand that TensorBoard uses these values. Could I retrieve the values from the event logs?
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
...
network = input_data(shape=[None, image_size, image_size, num_channels],
                     data_preprocessing=feature_normalization,
                     data_augmentation=None,
                     name='input_d')
...
network = regression(network, optimizer='SGD',
                     loss='categorical_crossentropy',
                     learning_rate=0.05, name='targets')
model_dnn_tr = tflearn.DNN(network, tensorboard_verbose=0)
...
with tf.Session(graph=graph) as session:
    ...
    for step in range(num_steps):
        ...
        batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        loss = model_dnn_tr.fit_batch({'input_d': batch_data}, {'targets': batch_labels})
        if (step % 50 == 0):
            trainAccr = accuracy(model_dnn_tr.predict({'input_d': batch_data}), batch_labels)
            validAccr = accuracy(model_dnn_tr.predict({'input_d': valid_dataset}), valid_labels)
            testAccr = accuracy(model_dnn_tr.predict({'input_d': test_dataset}), test_labels)
UPDATE with the correct answer
Could I retrieve the values from the event logs?
TensorBoard does have a means to download the accuracy data, but making use of it during training is problematic.
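If you do want to read values back out of the event files after the fact, a sketch (my own, not from the thread) with tf.train.summary_iterator looks like the following; the log path and tag name are hypothetical and depend on your run:

import tensorflow as tf

# Hypothetical path and tag; adjust to your tflearn log directory and summary names.
for event in tf.train.summary_iterator("/tmp/tflearn_logs/run_0/events.out.tfevents.0000"):
    for value in event.summary.value:
        if value.tag == 'Accuracy':
            print(event.step, value.simple_value)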
Does this code calculate the running accuracies as the model trains?
In a word. Yes.
The fit_batch method works as one might expect; as does the initial solution I posted below.
However, neither is the prescribed method.
Is there a better way of doing this within TFLearn?
Yes!
In order to track and interact with the training metrics, a training Callback should be implemented.
from tflearn import callbacks as cb

class BiasVarianceStrategyCallback(cb.Callback):
    def __init__(self, train_acc_thresh, run_id, rel_err=.1):
        """ Note: We are free to define our init function however we please. """
        def errThrshld(Tran_accuracy=train_acc_thresh, relative_err=rel_err):
            Tran_err = round(1 - Tran_accuracy, 2)
            Test_err = ...
            Vald_err = ...
            Diff_err = ...
            return {'Tr': Tran_err, 'Vl': Vald_err, 'Ts': Test_err, 'Df': Diff_err}
        return

    def update_acc_df(self, training_state, state):
        ...
        return

    def on_epoch_begin(self, training_state):
        """ """
        ...
        variance_found = ...
        if trn_acc_stall or vld_acc_stall:
            print("accuracy increase stalled. training epoch:"...
        if trn_lss_mvNup or vld_lss_mvNup:
            print("loss began increase training:"...
            raise StopIteration
            return
        if variance_found or bias_found:
            print("bias:", bias_found, "variance:", variance_found)
            raise StopIteration
            return
        return

    def on_batch_end(self, training_state, snapshot=False):
        self.update_acc_df(training_state, "batch")
        return

    def on_epoch_end(self, training_state):
        self.update_acc_df(training_state, "epoch")
        return

    def on_train_end(self, training_state):
        self.update_acc_df(training_state, "train")
        self.df = self.df.iloc[0:0]
        return
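For completeness, a callback instance is handed to training through the callbacks argument of DNN.fit. The sketch below is my own usage example; the constructor arguments and data names are assumptions:

# Hypothetical usage: wire the callback into TFLearn training.
monitor_cb = BiasVarianceStrategyCallback(train_acc_thresh=0.95, run_id='run_0')
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit(train_dataset, train_labels, n_epoch=10,
          validation_set=(valid_dataset, valid_labels),
          show_metric=True, callbacks=monitor_cb)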
Initial solution
The most satisfying solution I found thus far uses the dataset object and iterators to feed data, and is not much different from the fit_batch method in the OP.
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
...
graph = tf.Graph()
with graph.as_default():
    ...
    # create a placeholder to dynamically switch between
    # validation and training batch sizes
    batch_size_x = tf.placeholder(tf.int64)
    data_placeholder = tf.placeholder(tf.float32,
                                      shape=(None, image_size, image_size, num_channels))
    labels_placeholder = tf.placeholder(tf.float32, shape=(None, num_labels))
    # create dataset: one for training and one for test etc
    dataset = tf.data.Dataset.from_tensor_slices((data_placeholder, labels_placeholder)).batch(batch_size_x).repeat()
    # create an iterator
    iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
    # get the tensors that will contain data
    feature, label = iterator.get_next()
    # create the initialisation operations
    init_op = iterator.make_initializer(dataset)
    valid_data_x = tf.constant(valid_data)
    test_data_x = tf.constant(test_data)
    # Model.
    network = input_data(shape=[None, image_size, image_size, num_channels],
                         placeholder=data_placeholder,
                         data_preprocessing=feature_normalization,
                         data_augmentation=None,
                         name='input_d')
    ...
    logits = fully_connected(network, ...
    # Training computation.
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels_placeholder, logits=logits))
    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    prediction = tf.nn.softmax(logits)
...
with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    # initialise iterator with train data
    feed_dict = {data_placeholder: train_data,
                 labels_placeholder: train_data_labels,
                 batch_size_x: batch_size}
    session.run(init_op, feed_dict=feed_dict)
    for step in range(num_steps):
        batch_data, batch_labels = session.run([feature, label], feed_dict=feed_dict)
        feed_dict2 = {data_placeholder: batch_data, labels_placeholder: batch_labels}
        _, l, predictions = session.run([optimizer, loss, prediction],
                                        feed_dict=feed_dict2)
        if (step % 50 == 0):
            trainAccrMb = accuracy(predictions, batch_labels)
            feed_dict = {data_placeholder: valid_data_x.eval(), labels_placeholder: valid_data_labels}
            valid_prediction = session.run(prediction, feed_dict=feed_dict)
            validAccr = accuracy(valid_prediction, valid_data_labels)
            feed_dict = {data_placeholder: test_data_x.eval(),
                         labels_placeholder: test_data_labels}  # , batch_size_x: len(valid_data)}
            test_prediction = session.run(prediction, feed_dict=feed_dict)
            testAccr = accuracy(test_prediction, test_data_labels)

Tensorflow's loss function returns NAN after changing RNN to LSTM cell

I am training a model to predict a time series using an RNN. The model trains without any issue. Here's the original code:
tf.reset_default_graph()

num_inputs = 1
num_neurons = 100
num_outputs = 1
learning_rate = 0.0001
num_train_iterations = 2000
batch_size = 1

X = tf.placeholder(tf.float32, [None, time_steps-1, num_inputs])
y = tf.placeholder(tf.float32, [None, time_steps-1, num_outputs])

cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=num_neurons, activation=tf.nn.relu),
    output_size=num_outputs)

outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.75)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    sess.run(init)
    for iteration in range(num_train_iterations):
        elx, ely = next_batch(training_data, time_steps)
        sess.run(train, feed_dict={X: elx, y: ely})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: elx, y: ely})
            print(iteration, "\tMSE:", mse)
The problem comes when I change tf.contrib.rnn.BasicRNNCell to tf.contrib.rnn.BasicLSTMCell: there is a huge slowdown, and the loss (the MSE variable) becomes NaN. My best guess is that MSE is the wrong loss function and that I should try cross entropy. I searched for similar code and found that tf.nn.softmax_cross_entropy_with_logits() could be the solution, but I still don't understand how to implement it for my problem.
Usually the "NAN" occurs when your gradients blow up.
Here is some code for tf.softmax. Have a try.
# Output layer
logit = tf.add(tf.matmul(H1, w2), b2)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logit, labels=Y)
# Cost
cost = tf.reduce_mean(cross_entropy)
# Optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Prediction
y_pred = tf.nn.softmax(logit)
pred = tf.argmax(y_pred, axis=1)
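Since the NaN is attributed to exploding gradients, another common remedy (not part of the original answer, just a sketch using the names from the question) is to keep the MSE loss, which suits regression, and clip the gradients before applying them:

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
grads_and_vars = optimizer.compute_gradients(loss)
# Clip each gradient to a fixed range to keep the updates finite.
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars if g is not None]
train = optimizer.apply_gradients(clipped)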

tensorflow, mini-batch, tf.placeholder - read state of nodes at given iteration

I want to print the value of the MSE at each epoch/batch combination. The code below reports the tensor object representing the MSE instead of its value at each iteration:
print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse)
Example line of output:
Epoch 0 Batch_Index 0 MSE: Tensor("mse_2:0", shape=(), dtype=float32)
I understand this is because mse references tf.placeholder nodes, which by themselves do not hold any data. But once I run the code below:
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
the data should already be available, so the values of all nodes that depend on it should be accessible as well. However, requesting an evaluation of the MSE in the print statement results in an error:
print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse.eval())
Output2:
InvalidArgumentError: You must feed a value for placeholder tensor 'X_2' with dtype float and shape [?,9]
...
This tells me that mse.eval() does not see the data defined in sess.run().
Why do we experience this behavior?
How should we change the code to make it report the MSE at each specified iteration?
import numpy as np
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]  # ADD COLUMN OF 1s for BIAS!

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]

X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()

n_epochs = 100
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
learning_rate = 0.01

def fetch_batch(epoch, batch_index, batch_size):
    np.random.seed(epoch * n_batches + batch_index)  # not shown in the book
    indices = np.random.randint(m, size=batch_size)  # not shown
    X_batch = scaled_housing_data_plus_bias[indices]  # not shown
    y_batch = housing.target.reshape(-1, 1)[indices]  # not shown
    return X_batch, y_batch

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        for batch_index in range(n_batches):
            X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
            if (epoch % 50 == 0 and batch_index % 100 == 0):
                print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse)
    best_theta = theta.eval()

best_theta
First, I think this kind of debugging and printing is much easier to do with eager execution enabled in TensorFlow.
Without eager execution enabled, "print" in TensorFlow will never print the dynamic value of a tensor; it will only print the name of the tensor, which is rarely what you want. Instead, use tf.Print to inspect the runtime value of a tensor (by doing something like tensor = tf.Print(tensor, [tensor]), since tf.Print does not execute unless its output is used somewhere).
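A minimal sketch of that tf.Print pattern applied to this graph (my own wiring, not the asker's code):

# Attach a print side effect to mse; it fires whenever the returned tensor is
# evaluated, e.g. every time training_op runs.
mse = tf.reduce_mean(tf.square(error), name="mse")
mse = tf.Print(mse, [mse], message="MSE: ")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)  # training now triggers the print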
I made it work by modifying the print statement to the following:
print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse.eval(feed_dict={X: scaled_housing_data_plus_bias, y: housing_target}))
Moreover, by referencing the complete data set (not just batches) I was able to test the generalization of the current batch-based model on the whole sample. It should be easy to extend this to the test and hold-out samples as training of the model progresses.
I am afraid that such on-the-fly evaluation (even on batches) can have an impact on the performance of the model. I will run further tests on that.
