Select different modes by string in Tensorflow - python

I am trying to build a VAE network in which I want the model to do different things in different modes. I have three modes: "train", "same" and "different", and a function named interpolation(mode) that does different things depending on the mode. My code looks like:
import tensorflow as tf

### some code here
mode = tf.placeholder(dtype=tf.string, name="mode")

def interpolation(mode):
    if mode == "train":
        # do something
        print("enter train mode")
    elif mode == "same":
        # do other things
        print("enter same mode")
    else:
        # do other things
        print("enter different mode")

# some other code here
sess.run(feed_dict={mode: "train"})
sess.run(feed_dict={mode: "same"})
sess.run(feed_dict={mode: "different"})
But the output looks like:
enter different mode
enter different mode
enter different mode
which means the mode that gets passed in doesn't change the condition. What have I done wrong? How do I select the mode by a string argument?

First approach: you can select a different mode by using TensorFlow's native switch-case, tf.case. For example, assuming you have three cases, you can do:
import tensorflow as tf

mode = tf.placeholder(tf.string, shape=[], name="mode")

def cond1():
    return tf.constant('same')

def cond2():
    return tf.constant('train')

def cond3():
    return tf.constant('diff')

def cond4():
    return tf.constant('default')

y = tf.case({tf.equal(mode, 'same'): cond1,
             tf.equal(mode, 'train'): cond2,
             tf.equal(mode, 'diff'): cond3},
            default=cond4, exclusive=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={mode: "train"}))
    print(sess.run(y, feed_dict={mode: "same"}))
Second approach: here is another way to do this with the new AutoGraph API:
import tensorflow as tf
from tensorflow.contrib import autograph as ag

m = tf.placeholder(dtype=tf.string, name='mode')

def interpolation(mode):
    if mode == "train":
        return 'I am train'
    elif mode == "same":
        return 'I am same'
    else:
        return 'I am different'

cond_func = ag.to_graph(interpolation)(m)

with tf.Session() as sess:
    print(sess.run(cond_func, feed_dict={m: 'same'}))
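For completeness, when there are only two branches, tf.cond can express the same idea (a small sketch along the same lines, not part of the original answers):
import tensorflow as tf

mode = tf.placeholder(tf.string, shape=[], name="mode")
y = tf.cond(tf.equal(mode, "train"),
            lambda: tf.constant("I am train"),
            lambda: tf.constant("I am not train"))

with tf.Session() as sess:
    print(sess.run(y, feed_dict={mode: "train"}))  # b'I am train'
    print(sess.run(y, feed_dict={mode: "same"}))   # b'I am not train'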

Is there a way to plot training and validation losses on the same graph with HuggingFace TrainerAPI?

I am fine-tuning a HuggingFace transformer model (PyTorch version), using the HF Seq2SeqTrainingArguments & Seq2SeqTrainer, and I want to display the train and validation losses in TensorBoard (in the same chart).
As far as I understand, in order to plot the two losses together I need to use the SummaryWriter. The HF Callbacks documentation describes a TensorBoardCallback function that can receive a tb_writer argument:
https://huggingface.co/docs/transformers/v4.21.1/en/main_classes/callback#transformers.integrations.TensorBoardCallback
However, I cannot figure out what is the right way to use it, if it is even supposed to be used with the Trainer API.
My code looks something like this:
args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    evaluation_strategy='epoch',
    learning_rate=1e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=num_train_epochs,
    predict_with_generate=True,
    logging_steps=logging_steps,
    report_to='tensorboard',
    push_to_hub=False,
)

trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_train_data,
    eval_dataset=tokenized_val_data,
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
I would assume I should include the callback to TensorBoard in the trainer, e.g.,
callbacks = [TensorBoardCallback(tb_writer=tb_writer)]
but I cannot find a comprehensive example of how to use/what to import to use it.
I also found this feature request on GitHub,
https://github.com/huggingface/transformers/pull/4020
but no example of use, so I am confused...
Any insight will be appreciated.
The only way I know of to plot two values on the same TensorBoard graph is to use two separate SummaryWriters with the same root directory. For example, the logging directories might be: log_dir/train and log_dir/eval.
This approach is used in this answer, but for TensorFlow instead of PyTorch.
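In isolation, the idea looks like this (a minimal sketch with made-up values, independent of the Trainer; both writers log the same tag, so TensorBoard overlays the two curves):
from torch.utils.tensorboard import SummaryWriter

train_writer = SummaryWriter(log_dir="log_dir/train")
eval_writer = SummaryWriter(log_dir="log_dir/eval")

for step, (train_loss, eval_loss) in enumerate([(0.9, 1.0), (0.6, 0.8), (0.4, 0.7)]):
    train_writer.add_scalar("combined/loss", train_loss, step)
    eval_writer.add_scalar("combined/loss", eval_loss, step)

train_writer.close()
eval_writer.close()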
In order to do this with the 🤗 Trainer API, a custom callback is needed that uses two SummaryWriters. Here is the code for my custom callback, CombinedTensorBoardCallback, which I made by modifying the code for TensorBoardCallback:
import os
import logging

from transformers.integrations import TrainerCallback, is_tensorboard_available

logger = logging.getLogger(__name__)


def custom_rewrite_logs(d, mode):
    new_d = {}
    eval_prefix = "eval_"
    eval_prefix_len = len(eval_prefix)
    test_prefix = "test_"
    test_prefix_len = len(test_prefix)
    for k, v in d.items():
        if mode == 'eval' and k.startswith(eval_prefix):
            if k[eval_prefix_len:] == 'loss':
                new_d["combined/" + k[eval_prefix_len:]] = v
        elif mode == 'test' and k.startswith(test_prefix):
            if k[test_prefix_len:] == 'loss':
                new_d["combined/" + k[test_prefix_len:]] = v
        elif mode == 'train':
            if k == 'loss':
                new_d["combined/" + k] = v
    return new_d
class CombinedTensorBoardCallback(TrainerCallback):
    """
    A [`TrainerCallback`] that sends the logs to [TensorBoard](https://www.tensorflow.org/tensorboard).

    Args:
        tb_writers (`dict` of `SummaryWriter`, *optional*):
            The writers to use. Will instantiate them if not set.
    """

    def __init__(self, tb_writers=None):
        has_tensorboard = is_tensorboard_available()
        if not has_tensorboard:
            raise RuntimeError(
                "TensorBoardCallback requires tensorboard to be installed. Either update your PyTorch version or"
                " install tensorboardX."
            )
        if has_tensorboard:
            try:
                from torch.utils.tensorboard import SummaryWriter  # noqa: F401

                self._SummaryWriter = SummaryWriter
            except ImportError:
                try:
                    from tensorboardX import SummaryWriter

                    self._SummaryWriter = SummaryWriter
                except ImportError:
                    self._SummaryWriter = None
        else:
            self._SummaryWriter = None
        self.tb_writers = tb_writers

    def _init_summary_writer(self, args, log_dir=None):
        log_dir = log_dir or args.logging_dir
        if self._SummaryWriter is not None:
            self.tb_writers = dict(train=self._SummaryWriter(log_dir=os.path.join(log_dir, 'train')),
                                   eval=self._SummaryWriter(log_dir=os.path.join(log_dir, 'eval')))

    def on_train_begin(self, args, state, control, **kwargs):
        if not state.is_world_process_zero:
            return
        log_dir = None
        if state.is_hyper_param_search:
            trial_name = state.trial_name
            if trial_name is not None:
                log_dir = os.path.join(args.logging_dir, trial_name)
        if self.tb_writers is None:
            self._init_summary_writer(args, log_dir)
        for k, tbw in self.tb_writers.items():
            tbw.add_text("args", args.to_json_string())
            if "model" in kwargs:
                model = kwargs["model"]
                if hasattr(model, "config") and model.config is not None:
                    model_config_json = model.config.to_json_string()
                    tbw.add_text("model_config", model_config_json)
            # Version of TensorBoard coming from tensorboardX does not have this method.
            if hasattr(tbw, "add_hparams"):
                tbw.add_hparams(args.to_sanitized_dict(), metric_dict={})

    def on_log(self, args, state, control, logs=None, **kwargs):
        if not state.is_world_process_zero:
            return
        if self.tb_writers is None:
            self._init_summary_writer(args)
        for tbk, tbw in self.tb_writers.items():
            logs_new = custom_rewrite_logs(logs, mode=tbk)
            for k, v in logs_new.items():
                if isinstance(v, (int, float)):
                    tbw.add_scalar(k, v, state.global_step)
                else:
                    logger.warning(
                        "Trainer is attempting to log a value of "
                        f'"{v}" of type {type(v)} for key "{k}" as a scalar. '
                        "This invocation of Tensorboard's writer.add_scalar() "
                        "is incorrect so we dropped this attribute."
                    )
            tbw.flush()

    def on_train_end(self, args, state, control, **kwargs):
        for tbw in self.tb_writers.values():
            tbw.close()
        self.tb_writers = None
If you want to combine train and eval for other metrics besides the loss, then custom_rewrite_logs should be modified accordingly, for example as sketched below.
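For instance, a hedged sketch that also overlays an accuracy metric (this assumes the logs contain an eval_accuracy key and a training accuracy key, which depends on your compute_metrics; it is not part of the original callback):
def custom_rewrite_logs(d, mode):
    new_d = {}
    combined_keys = ("loss", "accuracy")  # metrics to plot on shared graphs
    for k, v in d.items():
        if mode == 'eval' and k.startswith("eval_") and k[len("eval_"):] in combined_keys:
            new_d["combined/" + k[len("eval_"):]] = v
        elif mode == 'train' and k in combined_keys:
            new_d["combined/" + k] = v
    return new_d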
As usual, the callback goes in the Trainer constructor. In my test example it was:
trainer = Trainer(
    model=rnn,
    args=train_args,
    train_dataset=train_dataset,
    eval_dataset=validation_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    callbacks=[CombinedTensorBoardCallback]
)
Also, you might want to remove the default TensorBoardCallback, or else, in addition to the combined loss graph, the training loss and validation loss will each appear separately, as they do by default.
trainer.remove_callback(TensorBoardCallback)
Here is the resulting TensorBoard view (screenshot not reproduced here).
It's pretty simple: you specify the reporting backend in the Seq2SeqTrainingArguments; there is no need to define it explicitly in the Seq2SeqTrainer.
model_arguments = Seq2SeqTrainingArguments(output_dir="./best_model/",
                                           num_train_epochs=EPOCHS,
                                           overwrite_output_dir=True,
                                           do_train=True,
                                           do_eval=True,
                                           do_predict=True,
                                           auto_find_batch_size=True,
                                           evaluation_strategy='epoch',
                                           warmup_steps=10000,
                                           logging_dir="./log_files/",
                                           disable_tqdm=False,
                                           load_best_model_at_end=True,
                                           save_strategy='epoch',
                                           save_total_limit=1,
                                           per_device_eval_batch_size=BATCH_SIZE,
                                           per_device_train_batch_size=BATCH_SIZE,
                                           predict_with_generate=True,
                                           report_to='wandb',
                                           run_name="rober_based_encoder_decoder_text_summarisation"
                                           )
Meanwhile, you can have other callbacks:
early_stopping = EarlyStoppingCallback(early_stopping_patience=5,
                                       early_stopping_threshold=0.001)
Then you pass the arguments and callbacks as a list to the trainer:
trainer = Seq2SeqTrainer(model=model,
                         compute_metrics=compute_metrics,
                         args=model_arguments,
                         train_dataset=Train,
                         eval_dataset=Val,
                         tokenizer=tokenizer,
                         callbacks=[early_stopping, ]
                         )
Make sure you are logged in to wandb before training.
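For reference, logging in can be done once from the shell with wandb login, or from Python (a small aside, not part of the original answer; it requires the wandb package):
import wandb
wandb.login()  # prompts for, or reuses, the stored W&B API key
Then train the model: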
trainer.train()

resave tf1.x saved_model.pb into new tf2.0 saved_model.pb

I have an old trained tf1.x model (let it be Model1), constructed with Placeholders, tf.contrib and so on. I can use this model by restoring the graph from a .ckpt checkpoint in a tf.Session (in tf1.x).
I decided that the easiest way to use Model1 is to export it:
# tf1.x code
tf.saved_model.simple_save(sess, saved_Model1_path,
                           inputs={'input': 'Placeholder:0'},
                           outputs={'output': '.../Sigmoid:0'})
I can use the obtained saved_model.pb even in tf2.0:
# tf2.0 code
Model1 = tf.saved_model.load(saved_Model1_path)
out = Model1.signatures['serving_default'](tf.convert_to_tensor(data))['output'].numpy()
out = Model1.signatures['serving_default'].prune('Placeholder:0', '.../Sigmoid:0')(data)
out = Model1.prune('Placeholder:0', '.../Sigmoid:0')(data)
Now imagine I have pre/post-processing written with tf2.0 tf.function.
I want the construction preprocessing -> Model1 -> postprocessing to be exported as a single saved_model.pb in tf2.0.
And here the problems begin, because the saved_model.pb of Model1 uses tf.placeholders (something like that; I'm not an expert here).
At the same time, I can easily build a saved_model.pb from another model exported with tf2.0:
import os
import tensorflow as tf
assert tf.__version__[0] == '2'

class M1(tf.Module):
    def __init__(self):
        super(M1, self).__init__()
        self.v = tf.Variable(2.)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def M1_func(self, x):
        return x * self.v

# build some saved_model.pb
m1 = M1()
path_1 = './save1'
path_to_save = os.path.realpath(path_1)
tf.saved_model.save(m1, path_to_save)

# load built saved_model.pb and check it works
m1 = tf.saved_model.load(path_1)
assert 6 == m1.M1_func(3.).numpy()

# build other saved_model.pb using first saved_model.pb as a part of computing graph
class M2(tf.Module):
    def __init__(self):
        super(M2, self).__init__()
        self.run = m1
        self.v = tf.Variable(3.)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def M2_func(self, x):
        return self.run.M1_func(x) * self.v

m2 = M2()
path_2 = './save2'
path_to_save = os.path.realpath(path_2)
tf.saved_model.save(m2, path_to_save)

m2 = tf.saved_model.load(path_2)
assert 18 == m2.M2_func(3.).numpy()
But when I try to do the same thing, except that the first saved_model.pb is saved with tf1.x instead of tf2.0, it doesn't work:
# save first saved_model.pb with tf1.x
import os
import tensorflow as tf
assert tf.__version__[0] == '1'

inp = tf.placeholder(shape=[], dtype=tf.float32)
a = tf.Variable(1.5)
out = a * inp

sess = tf.Session()
sess.run(tf.global_variables_initializer())
assert 7.5 == out.eval({inp: 5.}, sess)

path_3 = './save3'
path_to_save = os.path.realpath(path_3)
tf.saved_model.simple_save(sess, path_to_save, inputs={'input': inp}, outputs={'output': out})
Now switch to tf2.0 and try to build a new saved_model.pb with the first one as part of the computation graph:
import os
import tensorflow as tf
assert tf.__version__[0] == '2'

path_3 = './save3'
path_to_save = os.path.realpath(path_3)
m1 = tf.saved_model.load(path_to_save)

class M2(tf.Module):
    def __init__(self):
        super(M2, self).__init__()
        self.run = m1.signatures['serving_default'].prune('Placeholder:0', 'mul:0')
        self.v = tf.Variable(3.)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def M2_func(self, x):
        return self.run(x) * self.v

m2 = M2()
assert 22.5 == m2.M2_func(5.)  # ofc eager execution works

# now save M2 to saved_model.pb and check it works (it does not)
path_4 = './save4'
path_to_save = os.path.realpath(path_4)
tf.saved_model.save(m2, path_to_save)

m2 = tf.saved_model.load(path_4)
m2.M2_func(5.)  # error:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value StatefulPartitionedCall/StatefulPartitionedCall/Variable
[[{{node StatefulPartitionedCall/StatefulPartitionedCall/Variable/read}}]] [Op:__inference_restored_function_body_207]
Function call stack:
restored_function_body
So the question is: how do I save this architecture in a single saved_model.pb in tf2.0?
preprocessing (tf2.0 @tf.function) -> Model1 (saved_model.pb created in tf1.x) -> postprocessing (tf2.0 @tf.function)
The problem was solved. Take a look at this exporting function and how to use it. The implementation accepts a single input tensor name and a list of output tensor names.
import tensorflow as tf

def export_tf1(session, in_tnsr_fullname, out_tnsrS_fullname, export_dir='./export'):
    assert isinstance(in_tnsr_fullname, str)
    assert all([isinstance(out_tnsr_fullname, str) for out_tnsr_fullname in out_tnsrS_fullname])

    in_tnsr_name = in_tnsr_fullname.split(':')[0]
    out_tnsrS_name = [out_tnsr_fullname.split(':')[0] for out_tnsr_fullname in out_tnsrS_fullname]

    graph_def = tf.graph_util.convert_variables_to_constants(session, session.graph.as_graph_def(), out_tnsrS_name)

    tf.reset_default_graph()
    outs = tf.import_graph_def(graph_def, name="", return_elements=out_tnsrS_fullname)
    g = outs[0].graph

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    with tf.Session(graph=g) as sess:
        input_signatures = {in_tnsr_name: g.get_tensor_by_name(in_tnsr_fullname)}
        output_signatures = {}
        for out_tnsr_name, out_tnsr_fullname in zip(out_tnsrS_name, out_tnsrS_fullname):
            output_signatures[out_tnsr_name] = g.get_tensor_by_name(out_tnsr_fullname)
        signature = tf.saved_model.signature_def_utils.predict_signature_def(input_signatures, output_signatures)
        builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            {tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature},
            clear_devices=True
        )
    builder.save()
How to use the exporting function to obtain a .pb from a tf1 .ckpt checkpoint:
import tensorflow as tf
assert tf.__version__[0] == '1'
g = tf.get_default_graph()
sess = tf.Session(graph=g)
ckpt_tf1_path = 'some_directory/name.ckpt' # just an example
tf.train.Saver().restore(sess, ckpt_tf1_path)
input_tensor_name = 'x_tnsr:0' # just an example
out_tensor_name = 'y_tnsr:0' # just an example
export_tf1(sess, input_tensor_name, [out_tensor_name], export_dir)
How to reuse the .pb obtained from the tf1 .ckpt inside a new .pb built with tf2.0:
import tensorflow as tf
assert tf.__version__[0] == '2'

class Export(tf.Module):
    def __init__(self):
        super(Export, self).__init__()
        tf1_saved_model_directory = 'directory/saved_model'  # just an example
        self.tf1_model = tf.saved_model.load(tf1_saved_model_directory)

        input_tensor_name = 'x_tnsr:0'  # just an example
        out_tensor_name = 'y_tnsr:0'  # just an example
        self.tf1_model = self.tf1_model.prune(input_tensor_name, out_tensor_name)

    @tf.function
    def __call__(self, x):
        out = self.tf1_model(x)
        return out

export_dir = './saved_model'
tf.saved_model.save(Export(), export_dir)

Chainer: Cannot classify, trained model(x) throws error

I am using Chainer for handwritten digit recognition. I have trained a model on the MNIST database. However, I cannot for some reason classify a single example. My guess was that I didn't choose the right format for the example, but I've tried it many ways and I still cannot resolve this issue. Apologies if this is obvious; I'm inexperienced with Python.
This is what the error looks like (posted as a screenshot, not reproduced here).
Code:
import chainer
import chainer.functions as F
import chainer.links as L
import os
import sys
from chainer.training import extensions
import numpy as np
from tkinter.filedialog import askopenfilename
from PIL import Image
from chainer import serializers
from chainer.dataset import concat_examples
from chainer.backends import cuda
from chainer import Function, gradient_check, report, training, utils, Variable
from chainer import datasets, iterators, optimizers, serializers
from chainer import Link, Chain, ChainList

CONST_RESUME = ''

class Network(chainer.Chain):
    def __init__(self, n_units, n_out):
        super(Network, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_units)
            self.l2 = L.Linear(None, n_units)
            self.l3 = L.Linear(None, n_out)

    def __call__(self, x):
        h1 = F.sigmoid(self.l1(x))
        h2 = F.sigmoid(self.l2(h1))
        return self.l3(h2)

def query_yes_no(question, default="no"):
    """Ask a yes/no question via raw_input() and return their answer.

    "question" is a string that is presented to the user.
    "default" is the presumed answer if the user just hits <Enter>.
        It must be "yes" (the default), "no" or None (meaning
        an answer is required of the user).

    The "answer" return value is True for "yes" or False for "no".
    """
    valid = {"yes": True, "y": True, "ye": True,
             "no": False, "n": False}
    if default is None:
        prompt = " [y/n] "
    elif default == "yes":
        prompt = " [Y/n] "
    elif default == "no":
        prompt = " [y/N] "
    else:
        raise ValueError("invalid default answer: '%s'" % default)
    while True:
        sys.stdout.write(question + prompt)
        choice = input().lower()
        if default is not None and choice == '':
            return valid[default]
        elif choice in valid:
            return valid[choice]
        else:
            sys.stdout.write("Please respond with 'yes' or 'no' "
                             "(or 'y' or 'n').\n")
    return np.argmax(y.data)

def main():
    #file_list = [None]*10
    #for i in range(10):
    #    file_list[i] = open('data{}.txt'.format(i), 'rb')
    print('MNIST digit recognition.')
    usr_in: str
    model = L.Classifier(Network(100, 10))
    chainer.backends.cuda.get_device_from_id(0).use()
    model.to_gpu()
    usr_in = input('Input (t) to train, (l) to load.')
    while len(usr_in) != 1 and (usr_in[0] != 'l' or usr_in[0] != 't'):
        print('Invalid input.')
        usr_in = input()
    if usr_in[0] == 't':
        optimizer = chainer.optimizers.Adam()
        optimizer.setup(model)
        train, test = chainer.datasets.get_mnist()
        train_iter = chainer.iterators.SerialIterator(train, batch_size=100, shuffle=True)
        test_iter = chainer.iterators.SerialIterator(test, 100, repeat=False, shuffle=False)
        updater = training.updaters.StandardUpdater(train_iter, optimizer, device=0)
        trainer = training.Trainer(updater, (10, 'epoch'), out='out.txt')
        trainer.extend(extensions.Evaluator(test_iter, model, device=0))
        trainer.extend(extensions.dump_graph('main/loss'))
        frequency = 1
        trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))
        trainer.extend(extensions.LogReport())
        if extensions.PlotReport.available():
            trainer.extend(extensions.PlotReport(['main/loss', 'validation/main/loss'], 'epoch', file_name='loss.png'))
            trainer.extend(extensions.PlotReport(['main/accuracy', 'validation/main/accuracy'], 'epoch', file_name='accuracy.png'))
        trainer.extend(extensions.PrintReport(['epoch', 'main/loss', 'validation/main/loss', 'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
        trainer.extend(extensions.ProgressBar())
        if CONST_RESUME:
            chainer.serializers.load_npz(CONST_RESUME, trainer)
        trainer.run()
        ans = query_yes_no('Would you like to save this network?')
        if ans:
            usr_in = input('Input filename: ')
            serializers.save_npz('{}.{}'.format(usr_in, 'npz'), model)
    elif usr_in[0] == 'l':
        filename = askopenfilename(initialdir=os.getcwd(), title='Choose a file')
        serializers.load_npz(filename, model)
    else:
        return
    while True:
        ans = query_yes_no('Would you like to evaluate an image with the current network?')
        if ans:
            filename = askopenfilename(initialdir=os.getcwd(), title='Choose a file')
            file = Image.open(filename)
            bw_file = file.convert('L')
            size = 28, 28
            bw_file.thumbnail(size, Image.ANTIALIAS)
            pix = bw_file.load()
            x = np.empty([28 * 28])
            for i in range(28):
                for j in range(28):
                    x[i * 28 + j] = pix[i, j]
            #gpu_id = 0
            #batch = (, gpu_id)
            x = (x.astype(np.float32))[None, ...]
            y = model(x)
            print('predicted_label:', y.argmax(axis=1)[0])
        else:
            return

main()
Can you show us the error message? Otherwise it is difficult to guess what causes the error.
But I guess the input shape may be different.
When you use the Chainer built-in function to get the dataset,
train, test = chainer.datasets.get_mnist()
these dataset images have the shape (minibatch, channel, height, width), but it seems you are constructing the input x with the shape (minibatch, height * width = 28*28 = 784), which is a different shape.
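As a quick sanity check (my own sketch, not part of the original answer), you can print the shape, dtype and value range of a training example and make the hand-built input match them; also remember that once the model has been moved to the GPU with model.to_gpu(), the input array has to be transferred there as well (for example with model.xp.asarray):
import numpy as np
import chainer

# Inspect what the network was trained on.
train, _ = chainer.datasets.get_mnist()
sample_x, sample_label = train[0]
print(sample_x.shape, sample_x.dtype, sample_x.min(), sample_x.max())

# A hand-built input should use the same shape, dtype and value range,
# plus a leading batch dimension.
x = np.zeros((1,) + sample_x.shape, dtype=sample_x.dtype)
print(x.shape)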
You can also refer to some tutorials for Chainer:
chainer handson
deep learning tutorial with chainer
MNIST inference code

TensorFlow: What are the input nodes for tf.Estimator models

I trained a Wide & Deep model using the pre-made Estimator class (DNNLinearCombinedClassifier), by essentially following the tutorial on tensorflow.org.
I wanted to do inference/serving, but without using tensorflow-serving. This basically comes down to feeding some test data to the correct input tensor and retrieving the output tensor.
However, I am not sure what the input nodes/layer should be. In the TensorFlow graph (graph.pbtxt), the following nodes seem relevant. But they are also related to the input queue, which is mainly used during training but not necessarily for inference (I can just send one instance at a time).
name: "enqueue_input/random_shuffle_queue"
name: "enqueue_input/Placeholder"
name: "enqueue_input/Placeholder_1"
name: "enqueue_input/Placeholder_2"
...
name: "enqueue_input/Placeholder_84"
name: "enqueue_input/random_shuffle_queue_EnqueueMany_1"
name: "enqueue_input/random_shuffle_queue_EnqueueMany_2"
name: "enqueue_input/random_shuffle_queue_EnqueueMany_3"
name: "enqueue_input/random_shuffle_queue_EnqueueMany_4"
name: "enqueue_input/random_shuffle_queue_EnqueueMany"
name: "enqueue_input/sub/y"
name: "enqueue_input/sub"
name: "enqueue_input/Maximum/x"
name: "enqueue_input/Maximum"
name: "enqueue_input/Cast"
name: "enqueue_input/mul/y"
name: "enqueue_input/mul"
Does anyone know the answer? Thanks in advance!
If you want inference, but without using tensorflow-serving, you can just use the tf.estimator.Estimator predict method.
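For example, a minimal sketch of that built-in route (relying on the my_input_fn and X_test names defined further down in this answer; predict() yields one dictionary of outputs per example):
predictions = estimator.predict(input_fn=my_input_fn(X=X_test["x"]))
for p in predictions:
    print(p["class"], p["probabilities"])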
But if you want to do it manually (so that it runs faster), you need a workaround. I am not sure if what I did was exactly the best approach, but it worked. Here's my solution.
1) Let's do the imports and create variables and fake data:
import os
import numpy as np
from functools import partial
import pickle
import tensorflow as tf
N = 10000
EPOCHS = 1000
BATCH_SIZE = 2
X_data = np.random.random((N, 10))
y_data = (np.random.random((N, 1)) >= 0.5).astype(int)
my_dir = os.getcwd() + "/"
2) Define an input_fn, which will use tf.data.Dataset. Save the tensor names in a dictionary ("input_tensor_map"), which maps each input key to the tensor name.
def my_input_fn(X, y=None, is_training=False):

    def internal_input_fn(X, y=None, is_training=False):
        if (not isinstance(X, dict)):
            X = {"x": X}

        if (y is None):
            dataset = tf.data.Dataset.from_tensor_slices(X)
        else:
            dataset = tf.data.Dataset.from_tensor_slices((X, y))

        if (is_training):
            dataset = dataset.repeat().shuffle(100)
            batch_size = BATCH_SIZE
        else:
            batch_size = 1

        dataset = dataset.batch(batch_size)
        dataset_iter = dataset.make_initializable_iterator()

        if (y is None):
            features = dataset_iter.get_next()
            labels = None
        else:
            features, labels = dataset_iter.get_next()

        input_tensor_map = dict()
        for input_name, tensor in features.items():
            input_tensor_map[input_name] = tensor.name

        with open(os.path.join(my_dir, 'input_tensor_map.pickle'), 'wb') as f:
            pickle.dump(input_tensor_map, f, protocol=pickle.HIGHEST_PROTOCOL)

        tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, dataset_iter.initializer)

        return (features, labels) if (not labels is None) else features

    return partial(internal_input_fn, X=X, y=y, is_training=is_training)
3) Define your model, to be used in your tf.estimator.Estimator. For example:
def my_model_fn(features, labels, mode):

    output = tf.layers.dense(inputs=features["x"], units=1, activation=None)
    logits = tf.identity(output, name="logits")
    prediction = tf.nn.sigmoid(logits, name="predictions")
    classes = tf.to_int64(tf.greater(logits, 0.0), name="classes")

    predictions_dict = {
        "class": classes,
        "probabilities": prediction
    }

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions_dict)

    one_hot_labels = tf.squeeze(tf.one_hot(tf.cast(labels, dtype=tf.int32), 2))
    loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=one_hot_labels, logits=logits)
    tf.summary.scalar("loss", loss)

    accuracy = tf.reduce_mean(tf.to_float(tf.equal(labels, classes)))
    tf.summary.scalar("accuracy", accuracy)

    # Configure the Training Op (for TRAIN mode)
    if (mode == tf.estimator.ModeKeys.TRAIN):
        train_op = tf.train.AdamOptimizer().minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    return tf.estimator.EstimatorSpec(mode=mode, loss=loss)
4) Train and freeze your model. The freeze method is from TensorFlow: How to freeze a model and serve it with a python API, to which I added a tiny modification.
def freeze_graph(output_node_names):
    """Extract the sub-graph defined by the output nodes and convert
    all its variables into constants.
    Args:
        output_node_names: a string containing all the output node names,
            comma separated
    """
    if (output_node_names is None):
        output_node_names = 'loss'

    if not tf.gfile.Exists(my_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            "directory: %s" % my_dir)

    if not output_node_names:
        print("You need to supply the name of a node to --output_node_names.")
        return -1

    # We retrieve our checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(my_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # We determine the full filename of our frozen graph
    absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + "/frozen_model.pb"

    # We clear devices to allow TensorFlow to control on which device it will load operations
    clear_devices = True

    # We start a session using a temporary fresh Graph
    with tf.Session(graph=tf.Graph()) as sess:
        # We import the meta graph in the current default Graph
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)

        # We restore the weights
        saver.restore(sess, input_checkpoint)

        # We use a built-in TF helper to export variables to constants
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,  # The session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(),  # The graph_def is used to retrieve the nodes
            output_node_names.split(",")  # The output node names are used to select the useful nodes
        )

        # Finally we serialize and dump the output graph to the filesystem
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))

    return output_graph_def
# *****************************************************************************
tf.logging.set_verbosity(tf.logging.INFO)
estimator = tf.estimator.Estimator(model_fn=my_model_fn, model_dir=my_dir)

if (estimator.latest_checkpoint() is None):
    estimator.train(input_fn=my_input_fn(X=X_data, y=y_data, is_training=True), steps=EPOCHS)

freeze_graph("predictions,classes")
5) Finally, you can use the frozen graph for inference; the input tensor names are in the dictionary that you saved. Again, the method to load the frozen model is from TensorFlow: How to freeze a model and serve it with a python API.
def load_frozen_graph(prefix="frozen_graph"):
    frozen_graph_filename = os.path.join(my_dir, "frozen_model.pb")

    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we import the graph_def into a new Graph and return it
    with tf.Graph().as_default() as graph:
        # The name var will prefix every op/node in your graph
        tf.import_graph_def(graph_def, name=prefix)

    return graph
# *****************************************************************************
X_test = {"x": np.random.random((int(N/2), 10))}
prefix = "frozen_graph"
graph = load_frozen_graph(prefix)

for op in graph.get_operations():
    print(op.name)

with open(os.path.join(my_dir, 'input_tensor_map.pickle'), 'rb') as f:
    input_tensor_map = pickle.load(f)

with tf.Session(graph=graph) as sess:
    input_feed = dict()
    for key, tensor_name in input_tensor_map.items():
        tensor = graph.get_tensor_by_name(prefix + "/" + tensor_name)
        input_feed[tensor] = X_test[key]

    logits = graph.get_operation_by_name(prefix + "/logits").outputs[0]
    probabilities = graph.get_operation_by_name(prefix + "/predictions").outputs[0]
    classes = graph.get_operation_by_name(prefix + "/classes").outputs[0]

    logits_values, probabilities_values, classes_values = sess.run([logits, probabilities, classes], feed_dict=input_feed)

Creating custom error function in CNTK

This is part of my current Python code for NN training using the CNTK module:
batch_axis = C.Axis.default_batch_axis()
input_seq_axis = C.Axis.default_dynamic_axis()
input_dynamic_axes = [batch_axis, input_seq_axis]
input_dynamic_axes2 = [batch_axis, input_seq_axis]
input = C.input_variable(n_ins, dynamic_axes=input_dynamic_axes, dtype=numpy.float32)
output = C.input_variable(n_outs, dynamic_axes=input_dynamic_axes2, dtype=numpy.float32)
dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs)
loss = C.squared_error(dnn_model, output)
error = C.squared_error(dnn_model, output)
lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch)
momentum_schedule = C.momentum_schedule(current_momentum)
learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain = False, l1_regularization_weight=l1_reg, l2_regularization_weight= l2_reg)
trainer = C.Trainer(dnn_model, (loss, error), [learner])
And here is the code for creating the NN model:
def create_model(features, hidden_layer_type, hidden_layer_size, n_out):
    logger.debug('Creating cntk model')
    assert len(hidden_layer_size) == len(hidden_layer_type)

    n_layers = len(hidden_layer_size)
    my_layers = list()
    for i in xrange(n_layers):
        if (hidden_layer_type[i] == 'TANH'):
            my_layers.append(C.layers.Dense(hidden_layer_size[i], activation=C.tanh, init=C.layers.glorot_uniform()))
        elif (hidden_layer_type[i] == 'LSTM'):
            my_layers.append(C.layers.Recurrence(C.layers.LSTM(hidden_layer_size[i])))
        else:
            raise Exception('Unknown hidden layer type')

    my_layers.append(C.layers.Dense(n_out, activation=None))

    my_model = C.layers.Sequential([my_layers])
    my_model = my_model(features)

    return my_model
Now, I would like to change the backpropagation so that when the error is calculated, it is not the direct network output that is used, but the output after some additional calculation. I tried to define something like this:
def create_error_function(self, prediction, target):
    prediction_denorm = C.element_times(prediction, self.std_vector)
    prediction_denorm = C.plus(prediction_denorm, self.mean_vector)
    prediction_denorm_rounded = C.round(C.element_times(prediction_denorm[0:5], C.round(prediction_denorm[5])))
    prediction_denorm_rounded = C.element_divide(prediction_denorm_rounded, C.round(prediction_denorm[5]))
    prediction_norm = C.minus(prediction_denorm_rounded, self.mean_vector[0:5])
    prediction_norm = C.element_divide(prediction_norm, self.std_vector[0:5])
    first = C.squared_error(prediction_norm, target[0:5])

    second = C.minus(C.round(prediction_denorm[5]), self.mean_vector[5])
    second = C.element_divide(second, self.std_vector[5])

    return C.plus(first, C.squared_error(second, target[5]))
and use it instead of the standard squared_error.
And here is the part for NN training:
dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs)
error_function = cntk_model.ErrorFunction(cmp_mean_vector, cmp_std_vector)
loss = error_function.create_error_function(dnn_model, output)
error = error_function.create_error_function(dnn_model, output)
lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch)
momentum_schedule = C.momentum_schedule(current_momentum)
learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain = False, l1_regularization_weight=l1_reg,
l2_regularization_weight= l2_reg)
trainer = C.Trainer(dnn_model, (loss, error), [learner])
trainer.train_minibatch({input: temp_train_x, output: temp_train_y})
But after two epochs I always start getting the same average loss, as if my network is not learning.
Every time you want to change how backprop works, you need to use stop_gradient. This is the only function whose gradient is different from the gradient of the corresponding forward operation. In the forward pass, stop_gradient acts as the identity. In the backward pass, it blocks the gradient from propagating.
To do an operation f(x) on some x in the forward pass and pretend it never happened in the backward pass, you need to do something like C.stop_gradient(f(x) - x) + x. In your case that would be
norm_features = C.stop_gradient(features/normalization - features) + features
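Applied to the rounding in your error function, the same pattern looks like this (a sketch of the technique only, using the C alias from above; not a tested drop-in replacement):
def straight_through_round(x):
    # Forward pass: behaves exactly like C.round(x).
    # Backward pass: the stop_gradient term contributes no gradient,
    # so gradients flow as if the rounding never happened.
    return C.plus(C.stop_gradient(C.minus(C.round(x), x)), x)

# e.g. use straight_through_round(prediction_denorm[5]) wherever
# C.round(prediction_denorm[5]) appears in create_error_function.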
