I'm trying to fine-tune Inception models and validate them with test data. But the examples on the TensorFlow Slim page cover either fine-tuning or testing; there is no example that does both in the same graph and session.
Basically I want to do this:
with tf.Graph().as_default():
    image, image_raw, label, image_name, label_name = dut.distorted_inputs(params, is_training=is_training)
    test_image, test_image_raw, test_label, test_image_name, test_label_name = dut.distorted_inputs(params, is_training=False)

    # I'm creating the model as suggested on the github slim page:
    logits, _ = inception.inception_v2(image, num_classes=N, is_training=True)
    tf.get_variable_scope().reuse_variables()
    test_logits, _ = inception.inception_v2(test_image, num_classes=N, is_training=False)

    err = tf.sub(logits, label)
    losses = tf.reduce_mean(tf.reduce_sum(tf.square(err)))
    # total_loss = model_loss + losses
    total_loss = losses + slim.losses.get_total_loss()

    test_err = tf.sub(test_logits, test_label)
    test_loss = tf.reduce_mean(tf.reduce_sum(tf.square(test_err)))

    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    train_op = slim.learning.create_train_op(total_loss, optimizer)

    final_loss = slim.learning.train(
        train_op,
        logdir=params["cp_file"],
        init_fn=ut.get_init_fn(slim, params),
        number_of_steps=2,
        summary_writer=summary_writer
    )
This code fails. As you can see, I don't have a separate loop to call my test model; I want to test my model on my test data at every 10th batch.
Does calling train with number_of_steps=10 and then using the evaluation code work?
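Something along those lines should work, a rough, untested sketch using the same names as the question; since number_of_steps in slim counts total global steps, it has to grow on each call, and num_rounds is a placeholder:
# Rough sketch (untested): alternate short slim training runs with evaluation.
# Names (params, ut, train_op, summary_writer) are the ones from the question.
for round_idx in range(num_rounds):
    slim.learning.train(
        train_op,
        logdir=params["cp_file"],
        init_fn=ut.get_init_fn(slim, params),
        number_of_steps=10 * (round_idx + 1),  # total steps so far, +10 per round
        summary_writer=summary_writer
    )
    # Then run the separate evaluation graph against the latest checkpoint,
    # e.g. with slim.evaluation.evaluate_once(...)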
UPDATE: To solve this, I kept the checkpoint structure the same but wrote a custom train_step function, with the help of the repo linked in the accepted answer of the question linked below, which calculated the gradients and used apply_gradients rather than compiling the model and using train_on_batch. This lets the full GAN state be restored. Sadly, with this method I'm fairly sure the dropout layers no longer work, as the discriminator works perfectly very early in training, which prevents the model from training properly. Nevertheless, the original problem is solved.
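For reference, the kind of custom train step I mean looks roughly like this (a sketch rather than my exact code; it follows the label convention used in the training loop below, with real targets near 0 and fake targets near 1, and it leaves discriminator.trainable as True since the variables to update are chosen explicitly):
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

@tf.function
def train_step(real_batch, noise):
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_batch = generator(noise, training=True)
        real_out = discriminator(real_batch, training=True)
        fake_out = discriminator(fake_batch, training=True)
        # Real samples are labelled ~0, generated samples ~1 (as in the loop below)
        d_loss = bce(tf.zeros_like(real_out), real_out) + bce(tf.ones_like(fake_out), fake_out)
        # The generator wants the discriminator to call its output "real" (0)
        g_loss = bce(tf.zeros_like(fake_out), fake_out)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_optimizer.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_optimizer.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss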
Original:
I am currently training a GAN in keras and trying to make it so that I can save the model and resume training later. Ordinarily in keras you'd simply use model.save(), however for a GAN if the discriminator and GAN (combined generator and discriminator, with discriminator weights not trainable) models are saved and loaded separately then the link between them is broken and the GAN will not function as expected. Someone asked a similar question here, How to save and resume training a GAN with multiple model parts with Tensorflow 2/ Keras, and was told to use tf.train.Checkpoint instead to save the full model at once as a checkpoint.
I've tried implementing this as follows:
def train(epochs, batch_size):
    checkpoint = tf.train.Checkpoint(g_optimizer=g_optimizer,
                                     d_optimizer=d_optimizer,
                                     generator=generator,
                                     discriminator=discriminator,
                                     gan=gan)
    ckpt_manager = tf.train.CheckpointManager(checkpoint, 'checkpoints', max_to_keep=3)
    if ckpt_manager.latest_checkpoint:
        checkpoint.restore(ckpt_manager.latest_checkpoint)

        discriminator.compile(loss='binary_crossentropy', optimizer=d_optimizer)

        i = Input(shape=(None, latent_dims))
        lcs = generator(i)
        discriminator.trainable = False
        valid = discriminator(lcs)
        gan = Model(i, valid)
        gan.compile(loss='binary_crossentropy', optimizer=g_optimizer)

    for epoch in range(epochs):
        # train discriminator...
        # train generator...
        ckpt_manager.save()
where g_optimizer, d_optimizer are just tf.keras.optimizers.Adam objects and generator, discriminator and gan are tf.keras.Model objects.
When I use this approach, the link between the gan model and the discriminator is preserved after loading in the checkpoint. The training works normally at first, but after I stop and then resume training using the checkpoint the discriminator loss starts massively increasing and the generated data becomes nonsensical.
Recompiling the models after loading the checkpoint like this was the only way I could think of that uses the last state of the optimizers, but clearly something isn't right; rather than resuming the training from where it was, this approach is massively disrupting it.
Have I used tf.train.Checkpoint incorrectly for what I'm trying to do? Please let me know if there's any more information you need to be able to address the question.
Edit: I have added the full code by request.
Here is the code that creates the models in the first place and then trains them. In this setup the models are compiled initially when first created, and then compiled again if resuming from a checkpoint, using the latest optimizer state. I appreciate it's weird to compile twice, but I couldn't think of another way to use the latest optimizer state from the checkpoint; if there's a better way I'm very happy to change it. Note, the unusual GRU-based GAN is because I'm testing out being able to generate variable-length time series. There's a lot of data-specific stuff in there, but hopefully on the whole it makes sense. train_df is just a pandas DataFrame containing all the training data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import optimizers as opt
from tensorflow.keras.layers import Input, GRU
from tensorflow.keras.models import Model


def build_generator():
    input = Input(shape=(None, latent_dims))
    gru1 = GRU(100, activation='relu', return_sequences=True)(input)
    gru2 = GRU(100, activation='relu', return_sequences=True)(gru1)
    output = GRU(9, return_sequences=True, activation='sigmoid')(gru2)
    model = Model(input, output)
    return model


def build_discriminator():
    input = Input(shape=(None, 9))
    gru1 = GRU(100, return_sequences=True)(input)
    gru2 = GRU(100, return_sequences=True)(gru1)
    output = GRU(1, activation='sigmoid')(gru2)
    model = Model(input, output)
    return model


d_optimizer = opt.Adam(learning_rate=lr)
g_optimizer = opt.Adam(learning_rate=lr)

# Build discriminator
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy', optimizer=d_optimizer)

# Build generator
generator = build_generator()

# Build combined model
i = Input(shape=(None, latent_dims))
lcs = generator(i)
discriminator.trainable = False
valid = discriminator(lcs)
gan = Model(i, valid)
gan.compile(loss='binary_crossentropy', optimizer=g_optimizer)


def train(epochs, batch_size=1):  # Only works with batch size of 1 currently
    sne = train_df.sn.unique()
    n_batches = int(len(sne) / batch_size)
    rng = np.random.default_rng(123)

    checkpoint = tf.train.Checkpoint(g_optimizer=g_optimizer,
                                     d_optimizer=d_optimizer,
                                     generator=generator,
                                     discriminator=discriminator,
                                     gan=gan)
    ckpt_manager = tf.train.CheckpointManager(checkpoint, 'checkpoints', max_to_keep=3)
    if ckpt_manager.latest_checkpoint:
        checkpoint.restore(ckpt_manager.latest_checkpoint)

        discriminator.compile(loss='binary_crossentropy', optimizer=d_optimizer)

        i = Input(shape=(None, latent_dims))
        lcs = generator(i)
        discriminator.trainable = False
        valid = discriminator(lcs)
        gan = Model(i, valid)
        gan.compile(loss='binary_crossentropy', optimizer=g_optimizer)

    for epoch in range(epochs):
        rng.shuffle(sne)
        g_losses, d_losses = [], []
        for batch in range(n_batches):
            real = np.random.uniform(0.0, 0.1, (batch_size, 1))  # Used instead of np.zeros to avoid zero gradients
            fake = np.random.uniform(0.9, 1.0, (batch_size, 1))  # Used instead of np.ones to avoid zero gradients

            # Select real data
            sn = sne[batch]
            sndf = train_df[train_df.sn == sn]
            X = sndf[['g_t', 'r_t', 'i_t', 'z_t', 'g', 'r', 'i', 'z', 'g_err', 'r_err', 'i_err', 'z_err']].values
            X = X.reshape((1, *X.shape))

            noise = rng.normal(size=(batch_size, latent_dims))
            noise = np.reshape(noise, (batch_size, 1, latent_dims))
            noise = np.repeat(noise, X.shape[1], 1)
            gen_lcs = generator.predict(noise)

            # Train discriminator
            d_loss_real = discriminator.train_on_batch(X, real)
            d_loss_fake = discriminator.train_on_batch(gen_lcs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # Train generator
            noise = rng.normal(size=(2 * batch_size, latent_dims))
            noise = np.reshape(noise, (2 * batch_size, 1, latent_dims))
            noise = np.repeat(noise, X.shape[1], 1)
            gen_labels = np.zeros((2 * batch_size, 1))
            g_loss = gan.train_on_batch(noise, gen_labels)

            g_losses.append(g_loss)
            d_losses.append(d_loss)

        ckpt_manager.save()
        full_g_loss = np.mean(g_losses)
        full_d_loss = np.mean(d_losses)
        print(f'{epoch + 1}/{epochs} g_loss={full_g_loss}, d_loss={full_d_loss}')


train()
If you have the following checkpoint structure, your model should work properly:
checkpoint_dir = 'checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")

checkpoint = tf.train.Checkpoint(generator_opt=generator_opt,
                                 discriminator_opt=discriminator_opt,
                                 gan_opt=gan_opt,
                                 generator=generator,
                                 discriminator=discriminator,
                                 GAN=GAN)

ckpt_manager = tf.train.CheckpointManager(checkpoint, checkpoint_dir, max_to_keep=3)

if ckpt_manager.latest_checkpoint:
    checkpoint.restore(ckpt_manager.latest_checkpoint)
    print('Latest checkpoint restored!!')
Note that the GAN model has its own optimizer. And then in your training loop, just save checkpoints at certain intervals, for example every 10 epochs.
for epoch in range(epochs):
    ...
    ...
    ...
    if epoch % 10 == 0:
        ckpt_manager.save()
I am using DistilBERT to do sentiment analysis on my dataset. The dataset contains text and a label for each row, which identifies whether the text is a positive or negative movie review (e.g. 1 = positive and 0 = negative). Here is the code from the huggingface documentation (https://huggingface.co/transformers/custom_datasets.html?highlight=imdb):
#This dataset can be explored in the Hugging Face model hub (IMDb), and can be alternatively downloaded with the 🤗 Datasets library with load_dataset("imdb").
wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
tar -xf aclImdb_v1.tar.gz
#This data is organized into pos and neg folders with one text file per example. Let’s write a function that can read this in.
from pathlib import Path

def read_imdb_split(split_dir):
    split_dir = Path(split_dir)
    texts = []
    labels = []
    for label_dir in ["pos", "neg"]:
        for text_file in (split_dir/label_dir).iterdir():
            texts.append(text_file.read_text())
            labels.append(0 if label_dir == "neg" else 1)
    return texts, labels

train_texts, train_labels = read_imdb_split('aclImdb/train')
test_texts, test_labels = read_imdb_split('aclImdb/test')

from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)

from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')

train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)

import torch

class IMDbDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
#Now that our datasets are ready, we can fine-tune a model either with the 🤗 Trainer/TFTrainer or with native PyTorch/TensorFlow. See training.
#Fine-tuning with Trainer
#The steps above prepared the datasets in the way that the Trainer expects. Now all we need to do is create a model to fine-tune, define the TrainingArguments/TFTrainingArguments and instantiate a Trainer/TFTrainer.
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)

model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

trainer = Trainer(
    model=model,                  # the instantiated 🤗 Transformers model to be trained
    args=training_args,           # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=val_dataset      # evaluation dataset
)

trainer.train()
#We can also train with native PyTorch/TensorFlow
from torch.utils.data import DataLoader
from transformers import DistilBertForSequenceClassification, AdamW

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
model.to(device)
model.train()

train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

optim = AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):
    for batch in train_loader:
        optim.zero_grad()
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)
        outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs[0]
        loss.backward()
        optim.step()

model.eval()
I want to test this model on a new piece of data. So, I have a dataframe which contains a piece of text/review in each row, and I want to predict the label. Does anyone know how I would go about doing that? I apologize, I am very new to this and would greatly appreciate any help! I tried taking in text, cleaning it, and then doing
prediction = model.predict(text)
and I got an error saying DistilBERT has no attribute .predict.
If you just want to use the model, you can use the corresponding pipeline:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
Then you can use it:
classifier("I hate this book")
The code you've shared from the documentation essentially covers the training and evaluation loop. Beware that it contains two ways of fine-tuning: once with the Trainer, which also includes evaluation, and once with native PyTorch/TF, which contains just the training portion and not the evaluation portion.
Here is how the native method can be tweaked to generate predictions on the test set:
from torch.utils.data import DataLoader

# Batch the test set so the model receives 2-D input tensors
test_loader = DataLoader(test_dataset, batch_size=64)

# Put model in evaluation mode
model.eval()

# Tracking variables for storing ground truth and predictions
predictions, true_labels = [], []

# Prediction Loop
for batch in test_loader:
    # Unpack the inputs from our dataloader and move to GPU/accelerator
    input_ids = batch['input_ids'].to(device)
    attention_mask = batch['attention_mask'].to(device)
    labels = batch['labels'].to(device)

    # Telling the model not to compute or store gradients, saving memory and
    # speeding up prediction
    with torch.no_grad():
        # Forward pass, calculate logit predictions (no labels passed, so the
        # first element of the output is the logits)
        outputs = model(input_ids, attention_mask=attention_mask)
        logits = outputs[0]

    # Move logits and labels to CPU
    logits = logits.detach().cpu().numpy()
    label_ids = labels.to('cpu').numpy()

    # Store predictions and true labels
    predictions.append(logits)
    true_labels.append(label_ids)
After the execution of this loop, predictions will contain the logits, i.e., the model's raw scores before any normalization into a probability distribution.
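If you do want probabilities, a softmax can be applied to each batch of logits afterwards, for example (a small NumPy sketch):
import numpy as np

def softmax(logits):
    # Subtract the per-row max for numerical stability before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# predictions is the list of per-batch logit arrays gathered above
probabilities = [softmax(batch_logits) for batch_logits in predictions]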
You can use the following to pick the label with the maximum score from the logits, and produce a classification report
import numpy as np
from sklearn.metrics import classification_report, accuracy_score

# Combine the results across all batches.
flat_predictions = np.concatenate(predictions, axis=0)

# For each sample, pick the label (0 or 1) with the higher score.
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()

# Combine the correct labels for each batch into a single list.
flat_true_labels = np.concatenate(true_labels, axis=0)

# Accuracy
print(accuracy_score(flat_true_labels, flat_predictions))

# Classification Report
report = classification_report(flat_true_labels, flat_predictions)
print(report)
For a more elegant way of performing predictions, you can create a BERTModel Class that would contain different methods and variables for handling the tokenization, creation of dataloader, running the predictions, etc.
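For instance, such a wrapper might look roughly like this (an illustrative sketch only; the class name, constructor arguments and defaults are mine, not from the transformers library):
import torch
from transformers import DistilBertForSequenceClassification, DistilBertTokenizerFast

class BERTSentimentClassifier:
    def __init__(self, model_dir, batch_size=32, device=None):
        self.device = device or torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.tokenizer = DistilBertTokenizerFast.from_pretrained(model_dir)
        self.model = DistilBertForSequenceClassification.from_pretrained(model_dir).to(self.device)
        self.model.eval()
        self.batch_size = batch_size

    def predict(self, texts):
        """Return the predicted label (0 or 1) for each text in a list."""
        preds = []
        for start in range(0, len(texts), self.batch_size):
            batch = texts[start:start + self.batch_size]
            enc = self.tokenizer(batch, truncation=True, padding=True, return_tensors='pt').to(self.device)
            with torch.no_grad():
                logits = self.model(**enc)[0]
            preds.extend(logits.argmax(dim=1).cpu().tolist())
        return preds
You could then call something like BERTSentimentClassifier(model_dir).predict(df['text'].tolist()) on your dataframe, where model_dir is wherever you saved the fine-tuned model and tokenizer.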
You can try code like this example: Link-BERT
You'll need to arrange the dataset according to the BERT model. In Section D of that link, you can just change the model name and use your own dataset.
I have created my own loop as shown in the TF 2 migration guide here.
I am currently able to see the graph for only the --- VISIBLE --- section of the code below. How do I make my model (defined in the ---NOT VISIBLE--- section) visible in tensorboard?
If I was not using a custom training loop, I could have gone with the documented model.fit approach:
model.fit(..., callbacks=[keras.callbacks.TensorBoard(log_dir=logdir)])
In TF 1, the approach used to be quite straightforward:
tf.compat.v1.summary.FileWriter(LOGDIR, sess.graph)
The Tensorboard migration guide clearly states (here) that:
No direct writing of tf.compat.v1.Graph - instead use @tf.function and trace functions
configure_default_gpus()
tf.summary.trace_on(graph=True)

K = tf.keras
dataset = sanity_dataset(BATCH_SIZE)

#-------------------------- NOT VISIBLE -----------------------------------------
model = K.models.Sequential([
    K.layers.Flatten(input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)),
    K.layers.Dense(10, activation=K.layers.LeakyReLU()),
    K.layers.Dense(IMG_WIDTH * IMG_HEIGHT * IMG_CHANNELS, activation=K.layers.LeakyReLU()),
    K.layers.Reshape((IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)),
])
#--------------------------------------------------------------------------------

optimizer = tf.keras.optimizers.Adam()
loss_fn = K.losses.Huber()

@tf.function
def train_step(inputs, targets):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
#-------------------------- VISIBLE ---------------------------------------------
        pred_loss = loss_fn(targets, predictions)
    gradients = tape.gradient(pred_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
#--------------------------------------------------------------------------------
    return pred_loss, predictions

with tf.summary.create_file_writer(LOG_DIR).as_default() as writer:
    for epoch in range(5):
        for step, (input_batch, target_batch) in enumerate(dataset):
            total_loss, predictions = train_step(input_batch, target_batch)

            if step == 0:
                tf.summary.trace_export(name="all", step=step, profiler_outdir=LOG_DIR)

            tf.summary.scalar('loss', total_loss, step=step)
    writer.flush()
    writer.close()
There's a similar unanswered question where the OP was unable to view any graph.
I'm sure there's a better way, but I just realized that a simple workaround is to just use the existing tensorboard callback logic:
tb_callback = tf.keras.callbacks.TensorBoard(LOG_DIR)
tb_callback.set_model(model) # Writes the graph to tensorboard summaries using an internal file writer
If you want, you could write your own summaries into the same directory it uses: tf.summary.create_file_writer(LOG_DIR + '/train').
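Putting the two together for the loop in the question might look roughly like this (a sketch; LOG_DIR, model, dataset and train_step are assumed to be defined as in the question):
import tensorflow as tf

tb_callback = tf.keras.callbacks.TensorBoard(LOG_DIR)
tb_callback.set_model(model)  # writes the Keras graph to the log directory

writer = tf.summary.create_file_writer(LOG_DIR + '/train')
with writer.as_default():
    for epoch in range(5):
        for step, (input_batch, target_batch) in enumerate(dataset):
            loss, _ = train_step(input_batch, target_batch)
            tf.summary.scalar('loss', loss, step=step)
    writer.flush()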
I'm trying to modify the code from the Convolutional Neural Network TensorFlow tutorial to get the individual probabilities for each class for each test image.
What alternative to tf.nn.in_top_k can I use? This method returns only a boolean tensor, but I want to preserve the individual values.
I use TensorFlow 1.4 and Python 3.5. I think lines 62-82 and 121-129 / 142 are probably the ones to be modified. Does somebody have a hint for me?
Lines 62-82:
def eval_once(saver, summary_writer, top_k_op, summary_op):
    """Run Eval once.

    Args:
        saver: Saver.
        summary_writer: Summary writer.
        top_k_op: Top K op.
        summary_op: Summary op.
    """
    with tf.Session() as sess:
        ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
        if ckpt and ckpt.model_checkpoint_path:
            # Restores from checkpoint
            saver.restore(sess, ckpt.model_checkpoint_path)
            # Assuming model_checkpoint_path looks something like:
            #   /my-favorite-path/cifar10_train/model.ckpt-0,
            # extract global_step from it.
            global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
        else:
            print('No checkpoint file found')
            return
Lines 121-129 + 142
[....]
images, labels = cifar10.inputs(eval_data=eval_data)
# Build a Graph that computes the logits predictions from the
# inference model.
logits = cifar10.inference(images)
# Calculate predictions.
top_k_op = tf.nn.in_top_k(logits, labels, 1)
[....]
You can compute the class probabilities from the raw logits:
# The vector of probabilities per each example in a batch
prediction = tf.nn.softmax(logits)
As a bonus, here's how to get the exact accuracy:
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
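Inside the tutorial's eval_once loop you could then fetch these probabilities alongside (or instead of) top_k_op, for example (a sketch; the prediction op would have to be built next to top_k_op and passed into eval_once):
# Evaluate both ops in one session call inside the evaluation loop
probs, in_top_k = sess.run([prediction, top_k_op])
print(probs[0])  # per-class probabilities for the first image in the batch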
I am having all kinds of trouble loading a tensorflow model to test on some new data. When I trained the model, I used this:
save_model_file = 'my_saved_model'
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_file)
This seems to result in the following files being created:
my_saved_model.meta
checkpoint
my_saved_model.index
my_saved_model.data-00000-of-00001
I have no idea which of these files I am supposed to pay attention to.
Now the model is trained, and I can't seem to load it or use it without throwing an exception. Here is what I am doing:
def neural_net_data_input(data_shape):
    theshape = (None,) + tuple(data_shape)
    return tf.placeholder(tf.float32, shape=theshape, name='x')

def neural_net_label_input(n_out):
    return tf.placeholder(tf.float32, shape=(None, n_out), name='one_hot_labels')

def neural_net_keep_prob_input():
    return tf.placeholder(tf.float32, name='keep_prob')

def do_generate_network(x):
    #
    # here is where I generate the network layer by layer.
    # this code works fine so I am not showing it here
    #
    pass

#
# Now I want to restore the model
#
tf.reset_default_graph()

input_data_shape = (32, 32, 1)
final_num_outputs = 43

graph1 = tf.Graph()
with graph1.as_default():
    x = neural_net_data_input(input_data_shape)
    one_hot_labels = neural_net_label_input(final_num_outputs)
    keep_prob = neural_net_keep_prob_input()

    logits = do_generate_network(x)
    # Name logits Tensor, so that it can be loaded from disk after training
    logits = tf.identity(logits, name='logits')

    #
    # accuracy: we use this for validation testing
    #
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

################################
# Evaluate
################################
new_data = myutils.load_pickle_file(SOME_DATA_FILE_NAME)
new_features = new_data['features']
new_one_hot_labels = new_data['labels']

print('Evaluating on new data...')
with tf.Session(graph=graph1) as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    saver.restore(sess, save_model_file)

    new_acc = sess.run(accuracy, feed_dict={x: new_features,
                                            one_hot_labels: new_one_hot_labels,
                                            keep_prob: 1.})
    print('Testing Accuracy For New Images: {}'.format(new_acc))
But when I do this, I get this:
TypeError: Cannot interpret feed_dict key as Tensor: The name 'save/Const:0' refers to a Tensor which does not exist. The operation, 'save/Const', does not exist in the graph.
So, I tried moving my graph inside the session like this:
################################
# Evaluate
################################
print('Evaluating on web data...')
with tf.Session() as sess:
    x = neural_net_data_input(input_data_shape)
    one_hot_labels = neural_net_label_input(final_num_outputs)
    keep_prob = neural_net_keep_prob_input()

    logits = do_generate_network(x)
    # Name logits Tensor, so that it can be loaded from disk after training
    logits = tf.identity(logits, name='logits')

    #
    # accuracy: we use this for validation testing
    #
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

    sess.run(tf.global_variables_initializer())

    my_save_dir = "/home/carnd/CarND-Traffic-Sign-Classifier-Project"
    load_model_meta_file = os.path.join(my_save_dir, "my_saved_model.meta")
    load_model_path = os.path.join(my_save_dir, "my_saved_model")

    new_saver = tf.train.import_meta_graph(load_model_meta_file)
    new_saver.restore(sess, load_model_path)

    web_acc = sess.run(accuracy, feed_dict={x: web_features,
                                            one_hot_labels: web_one_hot_labels,
                                            keep_prob: 1.})
    print('Testing Accuracy For Web Images: {}'.format(web_acc))
Now it runs without throwing an error, but the accuracy result it prints is 0.02! I am feeding in the very same data that during training I was getting 95% accuracy on. So it appears I am somehow loading my model incorrectly.
What am I doing wrong?
Steps for loading the trained model:
Load the graph:
You can load the graph using tf.train.import_meta_graph(). An example code would be:
model_path = "my_saved_model"
inference_graph = tf.Graph()
with tf.Session(graph= inference_graph) as sess:
# Load the graph with the trained states
loader = tf.train.import_meta_graph(model_path+'.meta')
loader.restore(sess, model_path)
Get the tensors: Get the tensors needed for inference by using get_tensor_by_name(). So in your model, make sure you give the tensors names so that you can look them up during inference.
# Get the tensors by their variable name
_accuracy = inference_graph.get_tensor_by_name('accuracy:0')
_x = inference_graph.get_tensor_by_name('x:0')
_y = inference_graph.get_tensor_by_name('y:0')
Test: This can be done by using the loaded tensors: sess.run(_accuracy, feed_dict={_x: ..., _y: ...})
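Putting the three steps together for the question's graph might look roughly like this (a sketch; the tensor names assume the placeholders were created with name='x', name='one_hot_labels' and name='keep_prob' as in the question, and new_features / new_one_hot_labels are the arrays loaded there):
import tensorflow as tf

model_path = "my_saved_model"

inference_graph = tf.Graph()
with tf.Session(graph=inference_graph) as sess:
    # Load the graph structure and restore the trained weights
    loader = tf.train.import_meta_graph(model_path + '.meta')
    loader.restore(sess, model_path)

    # Look up the tensors by the names given when the training graph was built
    _accuracy = inference_graph.get_tensor_by_name('accuracy:0')
    _x = inference_graph.get_tensor_by_name('x:0')
    _y = inference_graph.get_tensor_by_name('one_hot_labels:0')
    _keep_prob = inference_graph.get_tensor_by_name('keep_prob:0')

    new_acc = sess.run(_accuracy, feed_dict={_x: new_features,
                                             _y: new_one_hot_labels,
                                             _keep_prob: 1.0})
    print('Testing Accuracy For New Images: {}'.format(new_acc))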