Matrix factorization based recommendation using Tensorflow - python

I am new to TensorFlow and am exploring recommendation systems with it. I have looked at a few sample codes on GitHub, and most of them follow roughly the same pattern as this one:
https://github.com/songgc/TF-recomm/blob/master/svd_train_val.py
But the question is: how do I pick the top recommendations for user U1 in the above code?
If there is any sample code or approach, please share. Thanks.

It is a little difficult! Basically, when svd returns, it closes the session, and the tensors lose their values (you still keep the graph). There are a few options:
1) Save the model to a file and restore it later;
2) Don't put the session in a with tf.Session() as sess: ... block, and instead return the session;
3) Do the user processing inside the with ... block.
The worst option is option 3: you should train your model separately from using it. The best approach is to save your model and weights somewhere, then restore the session. However, you are still left with the question of how you use this session object once you have recovered it. To demonstrate just that part, I am going to solve this problem using option 3, assuming that you know how to restore a session.
def svd(train, test):
    samples_per_batch = len(train) // BATCH_SIZE
    iter_train = dataio.ShuffleIterator([train["user"],
                                         train["item"],
                                         train["rate"]],
                                        batch_size=BATCH_SIZE)
    iter_test = dataio.OneEpochIterator([test["user"],
                                         test["item"],
                                         test["rate"]],
                                        batch_size=-1)
    user_batch = tf.placeholder(tf.int32, shape=[None], name="id_user")
    item_batch = tf.placeholder(tf.int32, shape=[None], name="id_item")
    rate_batch = tf.placeholder(tf.float32, shape=[None])
    infer, regularizer = ops.inference_svd(user_batch, item_batch, user_num=USER_NUM, item_num=ITEM_NUM, dim=DIM,
                                           device=DEVICE)
    global_step = tf.contrib.framework.get_or_create_global_step()
    _, train_op = ops.optimization(infer, regularizer, rate_batch, learning_rate=0.001, reg=0.05, device=DEVICE)
    init_op = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_op)
        summary_writer = tf.summary.FileWriter(logdir="/tmp/svd/log", graph=sess.graph)
        print("{} {} {} {}".format("epoch", "train_error", "val_error", "elapsed_time"))
        errors = deque(maxlen=samples_per_batch)
        start = time.time()
        for i in range(EPOCH_MAX * samples_per_batch):
            users, items, rates = next(iter_train)
            _, pred_batch = sess.run([train_op, infer], feed_dict={user_batch: users, item_batch: items, rate_batch: rates})
            pred_batch = clip(pred_batch)
            errors.append(np.power(pred_batch - rates, 2))
            if i % samples_per_batch == 0:
                train_err = np.sqrt(np.mean(errors))
                test_err2 = np.array([])
                for users, items, rates in iter_test:
                    pred_batch = sess.run(infer, feed_dict={user_batch: users, item_batch: items})
                    pred_batch = clip(pred_batch)
                    test_err2 = np.append(test_err2, np.power(pred_batch - rates, 2))
                end = time.time()
                test_err = np.sqrt(np.mean(test_err2))
                print("{:3d} {:f} {:f} {:f}(s)".format(i // samples_per_batch, train_err, test_err, end - start))
                train_err_summary = make_scalar_summary("training_error", train_err)
                test_err_summary = make_scalar_summary("test_error", test_err)
                summary_writer.add_summary(train_err_summary, i)
                summary_writer.add_summary(test_err_summary, i)
                start = end

        # Get the top rated movie for user #1 for every item in the set
        userNumber = 1
        user_prediction = sess.run(infer, feed_dict={user_batch: np.array([userNumber]), item_batch: np.array(range(ITEM_NUM))})
        # The index number is the same as the item number. Orders from lowest (least recommended)
        # to largest (most recommended)
        index_rating_order = np.argsort(user_prediction)
        print("Top ten recommended items for user {} are".format(userNumber))
        print(index_rating_order[-10:][::-1])  # at the end, reverse the list
        # If you want to include the score:
        items_to_choose = index_rating_order[-10:][::-1]
        for item, score in zip(items_to_choose, user_prediction[items_to_choose]):
            print("{}: {}".format(item, score))
The only changes I made begin at the first commented line. To emphasize again, best practice would be to train in this function, but to actually make your predictions separately.
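For completeness, here is a minimal sketch of option 1 (save the trained weights, then restore them in a separate prediction script). It reuses the graph-building helpers from the script above (ops.inference_svd, USER_NUM, ITEM_NUM, DIM, DEVICE) and the usual numpy/tensorflow imports; the checkpoint path /tmp/svd/model.ckpt is just an example.
# At the end of training, while the session inside svd() is still open:
saver = tf.train.Saver()
saver.save(sess, "/tmp/svd/model.ckpt")

# Later, in a separate prediction script, rebuild the same graph and restore the weights:
user_batch = tf.placeholder(tf.int32, shape=[None], name="id_user")
item_batch = tf.placeholder(tf.int32, shape=[None], name="id_item")
infer, _ = ops.inference_svd(user_batch, item_batch, user_num=USER_NUM, item_num=ITEM_NUM,
                             dim=DIM, device=DEVICE)
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "/tmp/svd/model.ckpt")
    userNumber = 1
    # feed the user id once per item so both feeds have the same length
    user_prediction = sess.run(infer, feed_dict={user_batch: np.full(ITEM_NUM, userNumber, dtype=np.int32),
                                                 item_batch: np.arange(ITEM_NUM, dtype=np.int32)})
    top_ten = np.argsort(user_prediction)[-10:][::-1]
    print("Top ten recommended items for user {}: {}".format(userNumber, top_ten))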

Related

How to re-load a saved model (with graph?) to create the same results on future testing data?

I have trained a model and saved all the files (meta, index, checkpoint, etc.) using saver = tf.compat.v1.train.Saver(), and now I want to re-load that model to test it on new data. Loading works, but every time I run the restored model on the same dataset (i.e. run it once on a testing dataset, then start over and run it again on that same dataset) I get very different results. I'm hoping to be able to run it over and over again on the same dataset and get the same results.
I have two separate .py files, one for training and one for testing/loading the model to test on the dataset. My training variables/placeholders look something like this in the training.py file (in case it's relevant):
# set some tensorflow variables and placeholders, etc.
self.X = tf.compat.v1.placeholder(tf.float32, (None, self.state_size))
self.REWARDS = tf.compat.v1.placeholder(tf.float32, (None))
self.ACTIONS = tf.compat.v1.placeholder(tf.int32, (None))
feed_forward = tf.layers.dense(self.X, self.LAYER_SIZE, activation = tf.nn.relu)
self.logits = tf.layers.dense(feed_forward, self.OUTPUT_SIZE, activation = tf.nn.softmax)
input_y = tf.one_hot(self.ACTIONS, self.OUTPUT_SIZE)
loglike = tf.math.log((input_y * (input_y - self.logits) + (1 - input_y) * (input_y + self.logits)) + 1) # tf.log
rewards = tf.tile(tf.reshape(self.REWARDS, (-1,1)), [1, self.OUTPUT_SIZE])
self.cost = -tf.reduce_mean(loglike * (rewards + 1)) # leave this as a negative, so that the minimize function of the Adam optimizer will keep improving
# Adam Optimizer
self.optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate = self.LEARNING_RATE).minimize(self.cost) # minimize(self.cost)
# Start the Tensorflow session
self.sess = tf.compat.v1.InteractiveSession()
self.sess.run(tf.compat.v1.global_variables_initializer())
...
saver = tf.compat.v1.train.Saver()
save_path = saver.save(self.sess, "./agent_output/" + name + "_model")
And in the testing.py file, it looks something like this:
...
# Start the Tensorflow session
self.sess = tf.compat.v1.InteractiveSession()
new_saver = tf.train.import_meta_graph('./agent_output/' + name + '_model.meta')
new_saver.restore(self.sess, tf.train.latest_checkpoint('./agent_output/'))
print('Model loaded step 1')
#saver = tf.compat.v1.train.Saver()
#saver.restore(self.sess, "./agent_output/" + name + "_model")
#print('Model Restored!')
self.sess.run(tf.compat.v1.global_variables_initializer())
That should give you an idea of what I'm working with. As you can see, I've tried both the import_meta_graph approach and the commented-out saver.restore method, but I think I'm missing something, or maybe it isn't even possible in my case?
I'm just hoping someone can point me in the right direction. What I've discovered on my own is that there should be a way to not only load the variables, but also the graph? Or maybe I need to implement that during the training? I'm running Python 3.6 and Tensorflow 1.14 (I believe? Not 2.0).
Your problem is probably running self.sess.run(tf.compat.v1.global_variables_initializer()) after restoring the model. You only need to run the initializer for a fresh model, not a restored one; running it after the restore overwrites the restored weights with freshly initialized values, which is what produces different results on every run. Try it without that line.
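In other words, the restore path would look something like this. The checkpoint directory and name prefix are from the question; the tensor names near the end, and test_states, are only placeholders for illustration, since they depend on how your graph was built.
self.sess = tf.compat.v1.InteractiveSession()
new_saver = tf.train.import_meta_graph('./agent_output/' + name + '_model.meta')
new_saver.restore(self.sess, tf.train.latest_checkpoint('./agent_output/'))
# Do NOT call tf.compat.v1.global_variables_initializer() here: it would overwrite the
# restored weights with fresh random/zero values.
graph = tf.compat.v1.get_default_graph()
# Fetch the tensors you need by name (illustrative names; inspect your own graph for the real ones):
X = graph.get_tensor_by_name('Placeholder:0')
logits = graph.get_tensor_by_name('dense_1/Softmax:0')
predictions = self.sess.run(logits, feed_dict={X: test_states})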

Tensorflow: classification only based on first input

Getting to know Tensorflow, I built a toy network for classification. It consists of 15 input nodes for features identical to the one-hot encoding of the corresponding class label (with indexing beginning at 1) - so the data to be loaded from an input CSV may look like this:
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2
...
0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,15
The network has only one hidden layer and an output layer, the latter containing probabilities for a given class. Here's my problem: during training the network assigns a growing probability to whatever was fed in as the very first input.
Here are the relevant lines of code (some lines are omitted):
# number_of_p : number of samples
# number_of_a : number of attributes (features) -> 15
# number_of_s : number of styles (labels) -> 15

# function for generating hidden layers
# nodes is a list of nodes in each layer (len(nodes) = number of hidden layers)
def hidden_generation(nodes):
    hidden_nodes = [number_of_a] + nodes + [number_of_s]
    number_of_layers = len(hidden_nodes) - 1
    print(hidden_nodes)
    hidden_layer = list()
    for i in range (0,number_of_layers):
        hidden_layer.append(tf.zeros([hidden_nodes[i],batch_size]))
    hidden_weights = list()
    for i in range (0,number_of_layers):
        hidden_weights.append(tf.Variable(tf.random_normal([hidden_nodes[i+1], hidden_nodes[i]])))
    hidden_biases = list()
    for i in range (0,number_of_layers):
        hidden_biases.append(tf.Variable(tf.zeros([hidden_nodes[i+1],batch_size])))
    return hidden_layer, hidden_weights, hidden_biases

# loss function
def loss(labels, logits):
    cross_entropy = tf.losses.softmax_cross_entropy(
        onehot_labels = labels, logits = logits)
    return tf.reduce_mean(cross_entropy, name = 'xentropy_mean')

hidden_layer, hidden_weights, hidden_biases = hidden_generation(hidden_layers)

with tf.Session() as training_sess:
    training_sess.run(tf.global_variables_initializer())
    training_sess.run(a_iterator.initializer, feed_dict = {a_placeholder_feed: training_set.data})
    current_a = training_sess.run(next_a)
    training_sess.run(s_iterator.initializer, feed_dict = {s_placeholder_feed: training_set.target})
    current_s = training_sess.run(next_s)
    s_one_hot = training_sess.run(tf.one_hot((current_s - 1), number_of_s))
    for i in range (1,len(hidden_layers)+1):
        hidden_layer[i] = tf.tanh(tf.matmul(hidden_weights[i-1], (hidden_layer[i-1])) + hidden_biases[i-1])
    output = tf.nn.softmax(tf.transpose(tf.matmul(hidden_weights[-1],hidden_layer[-1]) + hidden_biases[-1]))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.1)
    # using the AdamOptimizer does not help, nor does choosing a much bigger and smaller learning rate
    train = optimizer.minimize(loss(s_one_hot, output))
    training_sess.run(train)
    for i in range (0, (number_of_p)):
        current_a = training_sess.run(next_a)
        current_s = training_sess.run(next_s)
        s_one_hot = training_sess.run(tf.transpose(tf.one_hot((current_s - 1), number_of_s)))
        # (no idea why I have to declare those twice for the datastream to move)
        training_sess.run(train)
I assume the loss function is being declared in the wrong place and always references the same vectors. However, replacing the loss function has not helped so far.
I will gladly provide the rest of the code if anyone is kind enough to help me.
EDIT: I've already discovered and fixed one major (and dumb) mistake: the weights go before the node values in tf.matmul.
You do not want to declare the training op over and over again; that is unnecessary and, as you pointed out, slower. You are also never feeding your current_a into the neural net, so you are not going to get new outputs for new samples. Finally, the way you are using the iterators isn't correct, which could also be the cause of the problem.
Here is some pseudocode to help you get the correct data flow. I would do the one-hot encoding beforehand, just to make loading the data during training easier.
train_dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
train_dataset = train_dataset.batch(batch_size)
train_dataset = train_dataset.repeat(num_epochs)
iterator = train_dataset.make_one_shot_iterator()
next_inputs, next_targets = iterator.get_next()

# Define Training procedure
global_step = tf.Variable(0, name="global_step", trainable=False)
loss = Neural_net_function(next_inputs, next_targets)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)

with tf.Session() as training_sess:
    for i in range(number_of_training_samples * num_epochs):
        training_sess.run(train_op)
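To make the pseudocode concrete, here is a small self-contained sketch of that data flow for the 15-feature / 15-class setup from the question. It is only an illustration: the tf.layers.dense model, the layer size and the hyperparameters are assumptions, not the asker's original network.
import numpy as np
import tensorflow as tf

batch_size, num_epochs, number_of_s = 5, 10, 15

# Toy data in the question's format: the features are the one-hot encoding of the label.
inputs = np.eye(number_of_s, dtype=np.float32)    # shape (15, 15)
targets = np.eye(number_of_s, dtype=np.float32)   # labels already one-hot encoded

train_dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
train_dataset = train_dataset.shuffle(100).batch(batch_size).repeat(num_epochs)
iterator = train_dataset.make_one_shot_iterator()
next_inputs, next_targets = iterator.get_next()

# Model and loss are built ONCE on the iterator tensors, so every sess.run(train_op)
# pulls a fresh batch through the same graph.
hidden = tf.layers.dense(next_inputs, 10, activation=tf.tanh)
logits = tf.layers.dense(hidden, number_of_s)
loss = tf.losses.softmax_cross_entropy(onehot_labels=next_targets, logits=logits)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as training_sess:
    training_sess.run(tf.global_variables_initializer())
    num_batches = (number_of_s // batch_size) * num_epochs
    for i in range(num_batches):
        _, current_loss = training_sess.run([train_op, loss])
        print(i, current_loss)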
Solved it! Backpropagation works properly when the training procedure is redeclared for every new dataset.
for i in range (0, (number_of_p)):
    current_a = training_sess.run(next_a)
    current_s = training_sess.run(next_s)
    s_one_hot = training_sess.run(tf.transpose(tf.one_hot((current_s - 1), number_of_s)))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.1)
    train = optimizer.minimize(loss(s_one_hot, output))
    training_sess.run(train)
...makes training considerably slower, but it works.

Tensorflow and reading binary data properly

I am trying to properly read my own binary data into TensorFlow, based on the fixed-length records section of this tutorial and by looking at the read_cifar10 function here. Mind you, I am new to TensorFlow, so my understanding may be off.
My Data
My files are binary with float32 type. The first 32-bit sample is the label, and the remaining 256 samples are the data. I want to reshape the data at the end to a [2, 128] matrix.
My Code So far:
import tensorflow as tf
import os

def read_data(filename_queue):
    item_type = tf.float32
    label_items = 1
    data_items = 256
    label_bytes = label_items * item_type.size
    data_bytes = data_items * item_type.size
    record_bytes = label_bytes + data_bytes
    reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
    key, value = reader.read(filename_queue)
    record_data = tf.decode_raw(value, item_type)
    # labels = tf.cast(tf.strided_slice(record_data, [0], [label_items]), tf.int32)
    label = tf.strided_slice(record_data, [0], [label_items])
    data0 = tf.strided_slice(record_data, [label_items], [label_items + data_items])
    data = tf.reshape(data0, [2, data_items // 2])
    return data, label

if __name__ == '__main__':
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # Set GPU device
    datafiles = ['train_0000.dat', 'train_0001.dat']
    num_epochs = 2
    filename_queue = tf.train.string_input_producer(datafiles, num_epochs=num_epochs, shuffle=True)
    data, label = read_data(filename_queue)
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        (x, y) = read_data(filename_queue)
        print(y.eval())
This code hangs at the print(y.eval()), but I fear I have much bigger issues than that.
Question:
When I execute this, I get a data and a label tensor returned. The problem is I don't quite understand how to actually read the data from the tensor. For example, I understand the autoencoder example here, however that example has a mnist.train.next_batch(batch_size) function that is called to read the next batch. Do I need to write such a function for my data, or is it handled by something internal to my read_data() function? If I need to write it, what does it look like?
Are there any other obvious things I'm missing? My goal in using this method is to reduce I/O overhead and not store all of the data in memory, since my files are quite large.
Thanks in advance.
Yes. You are pretty much done. At this point you need to:
1) Write your neural network model, which is supposed to take your data and return a label.
2) Write your cost function C, which takes the network prediction and the true label and gives you a cost.
3) Choose an optimizer.
4) Put everything together:
opt = tf.train.AdamOptimizer(learning_rate=0.001)

datafiles = ['train_0000.dat', 'train_0001.dat']
num_epochs = 2

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    filename_queue = tf.train.string_input_producer(datafiles, num_epochs=num_epochs, shuffle=True)
    data, label = read_data(filename_queue)
    example_batch, label_batch = tf.train.shuffle_batch(
        [data, label], batch_size=128,
        capacity=2000, min_after_dequeue=1000)  # capacity/min_after_dequeue are required; these values are just examples
    y_pred = model(example_batch)
    loss = C(label_batch, y_pred)
After which you iterate and minimize the loss with:
opt.minimize(loss)
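One detail worth making explicit, and the likely reason print(y.eval()) hangs in the question's script: with tf.train.string_input_producer and FixedLengthRecordReader, nothing is actually read until the queue runners are started. Below is a sketch of what the end-to-end loop might look like, assuming read_data, model, C, opt, datafiles and num_epochs are defined as above, and with the initializers run only after the whole graph (optimizer included) has been built.
filename_queue = tf.train.string_input_producer(datafiles, num_epochs=num_epochs, shuffle=True)
data, label = read_data(filename_queue)
example_batch, label_batch = tf.train.shuffle_batch([data, label], batch_size=128,
                                                    capacity=2000, min_after_dequeue=1000)
y_pred = model(example_batch)
loss = C(label_batch, y_pred)
train_op = opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())   # string_input_producer's num_epochs counter is a local variable
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)  # without this, reads block forever
    try:
        while not coord.should_stop():
            _, batch_loss = sess.run([train_op, loss])
    except tf.errors.OutOfRangeError:
        print("Done: the input queue ran out after {} epochs".format(num_epochs))
    finally:
        coord.request_stop()
        coord.join(threads)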
See also tf.train.string_input_producer behavior in a loop for related information.

Why not train GANs like this?

I'm new to generative networks and I decided to first try it on my own before looking up any existing code. These are the steps I used to train my GAN.
[lib: tensorflow]
1) Train a discriminator on the dataset. (I used a dataset of 2 features with labels of either 'meditating' or 'not meditating', dataset: https://drive.google.com/open?id=0B5DaSp-aTU-KSmZtVmFoc0hRa3c )
2) Once the discriminator is trained, save it.
3) Make another file for another feed-forward network (or any other architecture, depending on your dataset). This feed-forward network is the generator.
4) Once the generator is constructed, restore the discriminator and define a loss function for the generator such that it learns to fool the discriminator. (This didn't work in TensorFlow because sess.run() doesn't return a tf tensor, so the gradient path between G and D breaks, but it should work when done from scratch.)
d_output = sess.run(graph.get_tensor_by_name('ol:0'), feed_dict={graph.get_tensor_by_name('features_placeholder:0'): g_output})
print(d_output)
optimize_for = tf.constant([[0.0]*10]) #not meditating
g_loss = -tf.reduce_mean((d_output - optimize_for)**2)
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(g_loss)
Why don't we train a generator like this? It seems so much simpler. It's true that I couldn't manage to make this run in TensorFlow, but it should be possible if I implement it from scratch.
Full code:
Discriminator:
import pandas as pd
import tensorflow as tf
from sklearn.utils import shuffle
data = pd.read_csv("E:/workspace_py/datasets/simdata/linear_data_train.csv")
learning_rate = 0.001
batch_size = 1
n_epochs = 1000
n_examples = 999 # This is highly unsatisfying >:3
n_iteration = int(n_examples/batch_size)
features = tf.placeholder('float', [None, 2], name='features_placeholder')
labels = tf.placeholder('float', [None, 1], name = 'labels_placeholder')
weights = {
    'ol': tf.Variable(tf.random_normal([2, 1]), name = 'w_ol')
}

biases = {
    'ol': tf.Variable(tf.random_normal([1]), name = 'b_ol')
}
ol = tf.nn.sigmoid(tf.add(tf.matmul(features, weights['ol']), biases['ol']), name = 'ol')
loss = tf.reduce_mean((labels - ol)**2, name = 'loss')
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for epoch in range(n_epochs):
    ptr = 0
    data = shuffle(data)
    data_f = data.drop("lbl", axis = 1)
    data_l = data.drop(["f1", "f2"], axis = 1)
    for iteration in range(n_iteration):
        epoch_x = data_f[ptr: ptr + batch_size]
        epoch_y = data_l[ptr: ptr + batch_size]
        ptr = ptr + batch_size
        _, lss = sess.run([train, loss], feed_dict={features: epoch_x, labels:epoch_y})
    print("Loss # epoch ", epoch, " = ", lss)
print("\nTesting...\n")
data = pd.read_csv("E:/workspace_py/datasets/simdata/linear_data_eval.csv")
test_data_l = data.drop(["f1", "f2"], axis = 1)
test_data_f = data.drop("lbl", axis = 1)
print(sess.run(ol, feed_dict={features: test_data_f}))
print(test_data_l)
print("Saving model...")
saver = tf.train.Saver()
saver.save(sess, save_path="E:/workspace_py/saved_models/meditation_disciminative_model.ckpt")
sess.close()
Generator:
import tensorflow as tf
# hyper parameters
learning_rate = 0.1
# batch_size = 1
n_epochs = 100
from numpy import random
noise = random.rand(10, 2)
print(noise)
# Model
input_placeholder = tf.placeholder('float', [None, 2])
weights = {
    'hl1': tf.Variable(tf.random_normal([2, 3]), name = 'w_hl1'),
    'ol': tf.Variable(tf.random_normal([3, 2]), name = 'w_ol')
}

biases = {
    'hl1': tf.Variable(tf.zeros([3]), name = 'b_hl1'),
    'ol': tf.Variable(tf.zeros([2]), name = 'b_ol')
}
hl1 = tf.add(tf.matmul(input_placeholder, weights['hl1']), biases['hl1'])
ol = tf.add(tf.matmul(hl1, weights['ol']), biases['ol'])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
g_output = sess.run(ol, feed_dict={input_placeholder: noise})
# restoring discriminator
saver = tf.train.import_meta_graph("E:/workspace_py/saved_models/meditation_disciminative_model.ckpt.meta")
saver.restore(sess, tf.train.latest_checkpoint('E:/workspace_py/saved_models/'))
graph = tf.get_default_graph()
d_output = sess.run(graph.get_tensor_by_name('ol:0'), feed_dict={graph.get_tensor_by_name('features_placeholder:0'): g_output})
print(d_output)
optimize_for = tf.constant([[0.0]*10])
g_loss = -tf.reduce_mean((d_output - optimize_for)**2)
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(g_loss)
The discriminator's purpose isn't to classify your original data, or really discriminate anything about your original data. Its sole purpose is to discriminate your generator's output from original output.
Think of an example of an art forger. Your dataset is all original paintings. Your generator network G is an art forger, and your discriminator D is a detective whose sole purpose in life is to find forgeries made by G.
D can't learn much just by looking at original paintings. What's really important for him is to figure out what sets G's forgeries apart from everything else. G can't make any money selling forgeries if all his pieces are discovered and marked as such by D, so he must learn how to thwart D.
This creates an environment where G is constantly trying to make his pieces look more "like" original artwork, and D is constantly getting better at finding the nuances to G's forgery style. The better D gets, the better G needs to be in order to make a living. They each get better at their task until they (theoretically) reach some Nash equilibrium defined by the complexity of the networks and the data they're trying to forge.
That's why D needs to be trained back-and-forth with G, because it needs to know and adapt to G's particular nuances (which change over time as G learns and adapts), not just find some average definition of "not forged". By making D hunt G specifically, you force G to become a better forger, and thus end up with a better generator network. If you just train D once, then G can learn some easy, obvious, unimportant way to beat D and never actually produce very good forgeries.
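To make the back-and-forth concrete, here is a minimal sketch of the usual setup in TF 1.x terms. The key differences from the approach in the question are that D is applied to G's output tensor symbolically (no sess.run in between, so gradients can flow from D's verdict back into G's weights), and that D and G are updated alternately, each with its own variable list. The 2-feature toy shapes mirror the question; everything else (layer sizes, losses, learning rate, the fake "real" data) is just an illustration.
import numpy as np
import tensorflow as tf

def generator(z):
    with tf.variable_scope('G', reuse=tf.AUTO_REUSE):
        h = tf.layers.dense(z, 3, activation=tf.nn.relu)
        return tf.layers.dense(h, 2)            # fake 2-feature samples

def discriminator(x):
    with tf.variable_scope('D', reuse=tf.AUTO_REUSE):
        h = tf.layers.dense(x, 3, activation=tf.nn.relu)
        return tf.layers.dense(h, 1)            # logit: real vs. fake

real_x = tf.placeholder(tf.float32, [None, 2])
z = tf.placeholder(tf.float32, [None, 2])

fake_x = generator(z)
d_real = discriminator(real_x)
d_fake = discriminator(fake_x)                  # same D weights, applied to G's output tensor

d_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_real), logits=d_real) +
    tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(d_fake), logits=d_fake))
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_fake), logits=d_fake))

d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='D')
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='G')
d_train = tf.train.GradientDescentOptimizer(0.01).minimize(d_loss, var_list=d_vars)
g_train = tf.train.GradientDescentOptimizer(0.01).minimize(g_loss, var_list=g_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        real_batch = np.random.normal(loc=2.0, size=(16, 2))   # stand-in for the real dataset
        noise = np.random.rand(16, 2)
        sess.run(d_train, feed_dict={real_x: real_batch, z: noise})  # D learns to spot the current G
        sess.run(g_train, feed_dict={z: noise})                      # G learns to fool the current D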

Tensorflow program gives different answers after being deployed on AWS Lambda

I have written a program with TensorFlow that identifies a number of figures in an image. The model is trained with one function and then used with another function to label the figures. The training was done on my computer and the resulting model was uploaded to AWS together with the solve function.
On my computer it works well, but when I create a Lambda in AWS it behaves strangely and starts giving different answers with the same test data.
The model in the solve function is this:
# Recreate neural network from model file generated during training
# input
x = tf.placeholder(tf.float32, [None, size_of_image])
# weights
W = tf.Variable(tf.zeros([size_of_image, num_chars]))
# biases
b = tf.Variable(tf.zeros([num_chars]))
The solve function code to label the figures is this:
for testi in range(captcha_letters_num):
    # load model from file
    saver = tf.train.import_meta_graph(model_path + '.meta',
                                       clear_devices=True)
    saver.restore(sess, model_path)

    # Data to label
    test_x = np.asarray(char_imgs[testi], dtype=np.float32)

    predict_op = model(test_x, W, b)
    op = sess.run(predict_op, feed_dict={x: test_x})

    # find max probability from the probability distribution returned by softmax
    max_probability = op[0][0]
    max_probability_index = -1
    for i in range(num_chars):
        if op[0][i] > max_probability:
            max_probability = op[0][i]
            max_probability_index = i

    # append it to final output
    final_text += char_map_list[max_probability_index]

    # Reset the model so it can be used again
    tf.reset_default_graph()
With the same test data it gives different answers, and I don't know why.
Solved!
What I finally did was to keep the Session outside the loop and initialize the variables once. After the loop ends, I reset the graph.
saver = tf.train.Saver()
sess = tf.Session()

# Initialize variables
sess.run(tf.global_variables_initializer())

.
.
.

# passing each of the 5 characters through the NNet
for testi in range(captcha_letters_num):
    # Data to label
    test_x = np.asarray(char_imgs[testi], dtype=np.float32)

    predict_op = model(test_x, W, b)
    op = sess.run(predict_op, feed_dict={x: test_x})

    # find max probability from the probability distribution returned by softmax
    max_probability = op[0][0]
    max_probability_index = -1
    for i in range(num_chars):
        if op[0][i] > max_probability:
            max_probability = op[0][i]
            max_probability_index = i

    # append it to final output
    final_text += char_map_list[max_probability_index]

# Reset the model so it can be used again
tf.reset_default_graph()
sess.close()
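In case it helps anyone reading, the pattern the self-answer describes is essentially "build the graph and load the trained weights once, then reuse the session for every prediction". A sketch of that pattern, under the assumption that the elided part restores the checkpoint from model_path (the variable and helper names here are the asker's):
saver = tf.train.Saver()
sess = tf.Session()

# Restore the trained weights once, instead of re-importing the graph inside the loop.
saver.restore(sess, model_path)

final_text = ''
for testi in range(captcha_letters_num):
    test_x = np.asarray(char_imgs[testi], dtype=np.float32)
    op = sess.run(model(test_x, W, b), feed_dict={x: test_x})
    # np.argmax replaces the manual max-probability loop
    final_text += char_map_list[int(np.argmax(op[0]))]

tf.reset_default_graph()
sess.close()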
