Add custom pre-processing within the TensorFlow graph - python

I have meta and checkpoint files from which I load the weights of a pre-trained model. This works fine.
To test this model with a new image I need to do some pre-processing on the image (converting from grayscale to RGB, etc.), which is currently done with the OpenCV library. Doing this I get my desired output.
But now I want to add this pre-processing code to TensorFlow itself, so that when I save this model and re-use it I can just pass the image path as an argument and don't need to do any pre-processing before passing it to TensorFlow. I want TensorFlow to handle all of this.
Here is what I have tried so far. The following code implements the pre-processing of the images within TensorFlow itself and saves new meta and checkpoint files:
graph = tf.Graph()
with graph.as_default():
    def dataprocess(x):
        # convert from grayscale to RGB, etc.
        return y

    path = ["images/test.jpg"]
    filenames = tf.constant(path)
    dataset = tf.contrib.data.Dataset.from_tensor_slices((filenames))
    dataset = dataset.map(
        lambda path: tf.py_func(dataprocess, [path], tf.float32))
    iterator = dataset.make_initializable_iterator()
    next_element = iterator.get_next()
    next_element = tf.reshape(next_element, [-1, 3, 224, 224])  # reshape, as TensorFlow reports an unknown shape otherwise
The code below is what I use to restore my previous model:
with tf.Session(graph=graph) as sess:
    sess.run(iterator.initializer)
    element1 = sess.run(next_element)
    saver = tf.train.import_meta_graph('./.meta')
    saver.restore(sess, './')
    saver1 = tf.train.Saver()
    input = graph.get_tensor_by_name('input_1:0')
    output = graph.get_tensor_by_name('predictions/Sigmoid:0')
    print(sess.run(output, {input: element1}))
    saver1.save(sess, '/tmp/test1/')
This all works fine. Next, I use the newly created meta and checkpoint files to test an image (path):
path = ["images/test.jpg"]
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('./.meta')
    saver.restore(sess, './')
    graph = tf.get_default_graph()
    input = graph.get_tensor_by_name('Const:0')
    output = graph.get_tensor_by_name('predictions/Sigmoid:0')
    print(sess.run(output, {input: path}))
Doing this I get the following error:
InvalidArgumentError: You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,3,224,224]
[[Node: input_1 = Placeholder[dtype=DT_FLOAT, shape=[?,3,224,224], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
input_1 corresponds to the input tensor of the graph above, so I am assuming I am not passing the path to the correct place. I am new to TensorFlow and there is very little documentation on this.
Thank you

But now what I want to do is add this pre-processing code to tensorflow itself so that when I save this model and re-use it I can only pass the image path as an argument and I don't need to do any pre-processing before passing it to tensorflow. I want tensorflow to handle all this.
Unfortunately, this is not possible if you use the tf.py_func operation. When you save the graph, the Python code inside the tf.py_func is not saved, as it is not part of the graph (see the py_func limitations here). The only way to make the pre-processing part of the graph is to rewrite it with TensorFlow ops, without using tf.py_func.
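To illustrate, here is a minimal sketch of what such a TensorFlow-only pipeline could look like, assuming the only pre-processing steps are decoding, grayscale-to-RGB conversion, resizing, and normalization (the exact steps depend on your OpenCV code; dataprocess_tf is a hypothetical name):

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    def dataprocess_tf(path):
        # All pre-processing is done with TensorFlow ops,
        # so it is saved as part of the graph.
        raw = tf.read_file(path)
        image = tf.image.decode_jpeg(raw, channels=1)   # grayscale
        image = tf.image.grayscale_to_rgb(image)        # 1 -> 3 channels
        image = tf.image.resize_images(image, [224, 224])
        return tf.cast(image, tf.float32) / 255.0       # normalize to [0, 1]

    filenames = tf.constant(["images/test.jpg"])
    dataset = tf.data.Dataset.from_tensor_slices(filenames)
    dataset = dataset.map(dataprocess_tf)
    iterator = dataset.make_initializable_iterator()
    next_element = iterator.get_next()
    # The question reshapes to [-1, 3, 224, 224]; depending on the model,
    # a tf.transpose to channels-first may be more appropriate than a reshape.
    next_element = tf.reshape(next_element, [-1, 3, 224, 224])

Because every op here is a regular graph op, it survives saver.save and import_meta_graph, unlike the body of a tf.py_func.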

Related

How to convert my TensorFlow 2 tflearn model to a graph.pb file

I am trying to convert my model to CoreML, so I save my model using this code:
model2.save("modelcnn2.tfl")
which produces the following files:
checkpoint
modelcnn2.tfl.data-00000-of-00001
modelcnn2.tfl.index
modelcnn2.tfl.meta
So how can I convert these to a single graph.pb and then to CoreML? I use this code:
import tensorflow as tf

meta_path = '/content/drive/MyDrive/check/modelcnn2.tfl.meta'  # Your .meta file
output_node_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
                     '10', '11', '12', '13', '14', '15', '16', '17', '18',
                     '19', '20', '21', '22', '23', '24', '25', '26', '27', '28']  # Output nodes

with tf.compat.v1.Session() as sess:
    # Restore the graph
    saver = tf.compat.v1.train.import_meta_graph(meta_path)
    # Load weights
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/.meta/file'))
    # Freeze the graph (compat.v1.graph_util, matching the compat.v1 session)
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)
    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
but this error appeared:
KeyError: "The name 'Adam' refers to an Operation not in the graph."
Any suggestions would be appreciated.
The error is telling you exactly what the issue is: Adam is not a supported operation. You have two options:
(1) Create a new model without an Adam layer.
(2) Implement a custom operator.
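Separately, numeric output node names like '0' through '28' are unusual. If you are unsure which node names actually exist in the restored graph, a quick debugging sketch (reusing meta_path from the question's code) is:

import tensorflow as tf

with tf.compat.v1.Session() as sess:
    saver = tf.compat.v1.train.import_meta_graph(meta_path)
    # Print every operation name so you can pick real output nodes
    # and spot training-only ops such as the Adam optimizer.
    for op in sess.graph.get_operations():
        print(op.name, op.type)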

Concatenating two saved models in tensorflow 1.13 [duplicate]

I've trained a DCGAN model and would now like to load it into a library that visualizes the drivers of neuron activation through image space optimization.
The following code works, but forces me to work with (1, width, height, channels) images when doing subsequent image analysis, which is a pain (because of the library's assumptions about the shape of the network input).
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
new_saver = tf.train.import_meta_graph(model_fn)
new_saver.restore(sess, './')
I'd like to change the input_map. After reading the source, I expected this code to work:
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
t_input = tf.placeholder(np.float32, name='images') # define the input tensor
t_preprocessed = tf.expand_dims(t_input, 0)
new_saver = tf.train.import_meta_graph(model_fn, input_map={'images': t_input})
new_saver.restore(sess, './')
But got an error:
ValueError: tf.import_graph_def() requires a non-empty name if input_map is used.
When the stack gets down to tf.import_graph_def() the name field is set to import_scope, so I tried the following:
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
t_input = tf.placeholder(np.float32, name='images') # define the input tensor
t_preprocessed = tf.expand_dims(t_input, 0)
new_saver = tf.train.import_meta_graph(model_fn, input_map={'images': t_input}, import_scope='import')
new_saver.restore(sess, './')
Which netted me the following KeyError:
KeyError: "The name 'gradients/discriminator/minibatch/map/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayReadV3/RefEnter:0' refers to a Tensor which does not exist. The operation, 'gradients/discriminator/minibatch/map/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayReadV3/RefEnter', does not exist in the graph."
If I set 'import_scope', I get the same error whether or not I set 'input_map'.
I'm not sure where to go from here.
In newer versions of TensorFlow (>= 1.2.0), the following works fine.
t_input = tf.placeholder(np.float32, shape=[None, width, height, channels], name='new_input')  # define the input tensor

# Here you need to give the name of the original model's input placeholder.
# For example, if the model defined its input as:
#   input_original = tf.placeholder(tf.float32, shape=(1, width, height, channels), name='original_placeholder_name')
new_saver = tf.train.import_meta_graph('/path/to/checkpoint_file.meta', input_map={'original_placeholder_name:0': t_input})
new_saver.restore(sess, '/path/to/checkpointfile')
So, the main issue is that you're not using the syntax right. Check the documentation for tf.import_graph_def for the use of input_map (link).
Let's break down this line:
new_saver = tf.train.import_meta_graph(model_fn, input_map={'images': t_input}, import_scope='import')
You didn't outline what model_fn is, but it needs to be a path to the file.
For the next part, in input_map, you're saying: replace the input in the original graph (the DCGAN) whose name is images with my variable (in the current graph) called t_input. Problematically, t_input and images reference the same object in different ways, as per this line:
t_input = tf.placeholder(np.float32, name='images')
In other words, images in input_map should actually be the name of the variable you're trying to replace in the DCGAN graph. You'll have to import the graph in its base form (i.e., without the input_map argument) and figure out the name of the variable you want to link to. It'll be in the list returned by tf.get_collection('variables') after you have imported the graph. Look for the dimensions (1, width, height, channels), but with the actual values in place of the variable names. If it's a placeholder, it'll look something like scope/Placeholder:0, where scope is replaced with the variable's scope.
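A minimal sketch of that inspection (assuming model_fn is the path to your .meta file) might look like:

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Import the graph in its base form, without input_map.
    new_saver = tf.train.import_meta_graph(model_fn)
    # Placeholders show up as operations of type 'Placeholder';
    # look for one whose shape is (1, width, height, channels).
    for op in graph.get_operations():
        if op.type == 'Placeholder':
            print(op.name, op.outputs[0].shape)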
Word of caution:
Tensorflow is very finicky about what it expects graphs to look like. So, if in the original graph specification the width, height, and channels are explicitly specified, then Tensorflow will complain (throw an error) when you try to connect a placeholder with a different set of dimensions. And, this makes sense. If the system was trained with some set of dimensions, then it only knows how to generate images with those dimensions.
In theory, you can still stick all kinds of weird stuff on the front of that network. But you will need to scale your input down so it meets those dimensions first (and the TensorFlow documentation says it's better to do that on the CPU, outside of the graph; i.e., before feeding it with feed_dict).
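As a rough illustration of such a CPU-side resize (OpenCV is used here purely as an example, and the file name is hypothetical; the right pre-processing depends on how the model was trained):

import cv2
import numpy as np

img = cv2.imread('some_image.jpg')              # (h, w, channels), uint8
img = cv2.resize(img, (width, height))          # cv2 takes dsize as (width, height)
img = img.astype(np.float32)[np.newaxis, ...]   # add a batch dimension
feed_dict = {t_input: img}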
Hope that helps!

How do I convert a keras model into a protocol buffer (.pb) file?

I have a trained Keras model that I would like to save to a protocol buffer (.pb) file. When I do so and load the model back, the predictions are wrong (and different from the original model's) and the weights are wrong. Here is the model type:
type(model)
> keras.engine.training.Model
Here is the code I used to freeze and save it to a .pb file.
from keras import backend as K
K.set_learning_phase(0)
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants

keras_session = K.get_session()
graph = keras_session.graph
graph.as_default()
keep_var_names = None
output_names = [out.op.name for out in model.outputs]
clear_devices = True
with graph.as_default():
    freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
    output_names = output_names or []
    output_names += [v.op.name for v in tf.global_variables()]
    input_graph_def = graph.as_graph_def()
    if clear_devices:
        for node in input_graph_def.node:
            node.device = ""
    frozen_graph = convert_variables_to_constants(keras_session, input_graph_def,
                                                  output_names, freeze_var_names)
tf.train.write_graph(frozen_graph, "model", "my_model.pb", as_text=False)
Then I read it like so:
pb_file = 'my_model.pb'
with tf.gfile.GFile(pb_file, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def)
    ops = graph.get_operations()

def get_outputs(feed_dict, output_tensor):
    with tf.Session() as sess:
        sess.graph.as_default()
        tf.import_graph_def(graph_def, name='')
        output_tensor_loc = sess.graph.get_tensor_by_name(output_tensor)
        out = sess.run(output_tensor_loc, feed_dict=feed_dict)
        print("Shape is ", out.shape)
        return out
Then, when I compare the weights at the first convolutional layer, they have the same shape (and the shape looks correct), but the weights are different. All the weights are roughly in the range 0 to 3, while in the original model the weights at the same layer are roughly in the range -256 to 256.
get_outputs(feed_dict, 'conv1_relu/Relu:0')
Is there something wrong in the above code? Or is this whole approach wrong? I saw someone in a blog post using tf.train.Saver, which I'm not doing. Do I need to do that? If so, how can I apply it to my keras.engine.training.Model?
Q: Is there something wrong in the above code? Or is this whole approach wrong?
A: The main problem is that tf.train.write_graph saves the TensorFlow graph, but not the weights of your model.
Q: Do I need to do use tf.train.Saver? If so, how can I do that to my model?
A: Yes. In addition to saving the graph (which is only necessary if your subsequent scripts do not explicitly recreate it), you should use tf.train.Saver to save the weights of your model:
import tensorflow as tf
from keras import backend as K
# ... define your model in Keras and do some work

# Add ops to save and restore all the variables.
saver = tf.train.Saver()  # setting var_list=None saves all variables

# Get the TensorFlow session
sess = K.get_session()

# Save the model's variables
save_path = saver.save(sess, "/tmp/model.ckpt")
Calling saver.save also saves a MetaGraphDef which can then be used to restore the graph, so it is not necessary for you to use tf.train.write_graph. To restore the weights, simply use saver.restore:
with tf.Session() as sess:
    # Restore variables from disk
    saver.restore(sess, "/tmp/model.ckpt")
The fact that you are using a Keras model does not change this approach as long as you use the TensorFlow backend (you still have a TensorFlow graph and weights). For more information about saving and restoring models in TensorFlow, please see the save and restore tutorial.
Alternative (neater) way to save a Keras model
Now, since you are using a Keras model, it is perhaps more convenient to save the model with model.save('model_path.h5') and restore it as follows:
from keras.models import load_model
# restore previously saved model
model = load_model('model_path.h5')
UPDATE: Generating a single .pb file from the .ckpt files
If you want to generate a single .pb file, please use the former tf.train.Saver approach. Once you have generated the .ckpt files (.meta holds the graph and .data the weights), you can get the .pb file by calling Morgan's function freeze_graph as follows:
freeze_graph('/tmp', '<Comma separated output node names>')
References:
Save and restore in TensorFlow.
StackOverflow answer to TensorFlow saving into/loading a graph from a file.
Saving/loading whole models in Keras.
Morgan's function to generate a .pb file from the .ckpt files.

Tensorflow: Feeding large datasets of JPEGs to frozen inference graphs

I have a frozen graph in TensorFlow that is set up to take (batch_size, 224, 224, 3) as input, and thus does not take an input function. I want to change it to take 50000 real images from a data folder. How do I feed those images into the metagraph without going over the memory limit?
I know that I could feed the data via an input function, but that would mean changing my frozen graph. Any suggestions for avoiding that? Is converting the input to take a function really the only way? I would prefer not to modify the frozen graph.
This is the code I use to load my graph:
dummy_input = np.random.random_sample((batch_size, 224, 224, 3))
tf.reset_default_graph()
g = tf.Graph()
outlist = []
with g.as_default():
    # Processing the data
    inc = tf.constant(dummy_input, dtype=tf.float32)
    dataset = tf.data.Dataset.from_tensors(inc)
    dataset = dataset.repeat()
    iterator = dataset.make_one_shot_iterator()
    next_element = iterator.get_next()
    # Loading the graph
    out = tf.import_graph_def(
        graph_def=gdef,
        input_map={input_layer: next_element},  # the model's input layer name is required for this
        return_elements=[output_layer]          # output layer name
    )
    out = out[0].outputs[0]
    outlist.append(out)
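One way to feed the real images without loading all 50000 at once (a sketch only; the glob pattern and decode steps are assumptions, and gdef, input_layer, output_layer, batch_size are taken from the code above) is to replace the dummy constant with a tf.data pipeline that reads and decodes the JPEGs lazily, so only one batch is ever materialized in memory:

import tensorflow as tf

def load_image(path):
    # Decode one JPEG at a time instead of materializing all images.
    raw = tf.read_file(path)
    image = tf.image.decode_jpeg(raw, channels=3)
    image = tf.image.resize_images(image, [224, 224])
    return tf.cast(image, tf.float32)

with tf.Graph().as_default() as g:
    files = tf.data.Dataset.list_files('data/*.jpg')
    dataset = files.map(load_image).batch(batch_size)
    next_element = dataset.make_one_shot_iterator().get_next()
    out = tf.import_graph_def(
        graph_def=gdef,
        input_map={input_layer: next_element},
        return_elements=[output_layer])

The frozen graph itself is untouched; only the tensor mapped onto its input changes.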

Lasagne/Theano, problems loading pickled model

I think I'm losing my mind at this point.
I'm using Lasagne for a small convolutional neural network. It trains perfectly, and I can compute the error on the training and validation sets as well, but I cannot save the trained model to disk. More precisely, I can save it and load it, but I cannot use it to predict on new data.
This is what I do after training:
model = {'network': network, 'params': get_all_params(network), 'params_values': get_all_param_values(network)}
pickle.dump(model, open('models/model_1.pkl', 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
And this is what I do to load the model:
with open('models/model.pkl', 'rb') as pickle_file:
    model = pickle.load(pickle_file)

network = model['network']
values = model['params_values']
set_all_param_values(network, values)

T_input = T.tensor4('input', dtype='float32')
T_target = T.ivector('target')

predictions = get_output(network, deterministic=True)
loss = (cross_entropy(predictions, T_target)).mean()
acc = T.mean(T.eq(T.argmax(predictions, axis=1), T_target), dtype=config.floatX)
test_fn = function([T_input, T_target], [loss, acc])
I cannot even pass in real numpy input; I get this error:
theano.compile.function_module.UnusedInputError: theano.function was asked to create a
function computing outputs given certain inputs, but the provided input variable at index 0
is not part of the computational graph needed to compute the outputs: input.
To make this error into a warning, you can pass the parameter
on_unused_input='warn' to theano.function. To disable it completely, use
on_unused_input='ignore'.
I then tried setting the parameter on_unused_input='warn', and this is the result:
theano.gof.fg.MissingInputError: An input of the graph, used to compute (..)
was not provided and not given a value.Use the Theano flag
exception_verbosity='high',for more information on this error.
The problem is that your T_input is not tied to the network's input layer, and hence Theano cannot compile it. Use the input variable of the loaded network instead:
T_input = lasagne.layers.get_all_layers(network)[0].input_var
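Put together, the loading code could then look like this (a sketch; it only swaps the manually created T_input for the network's own input variable, and assumes the same imports as the question's code):

import lasagne

# Reuse the input variable the pickled network was built with,
# so the compiled function's input is part of the computational graph.
T_input = lasagne.layers.get_all_layers(network)[0].input_var
T_target = T.ivector('target')

predictions = get_output(network, deterministic=True)
loss = cross_entropy(predictions, T_target).mean()
acc = T.mean(T.eq(T.argmax(predictions, axis=1), T_target), dtype=config.floatX)
test_fn = function([T_input, T_target], [loss, acc])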
